Hacker News | spydum's favorites

"browsers using the ExtensionManifestV2Availability policy will be exempt from any browser changes until June 2025"

To extend Manifest V2 support in Chrome, save the text below to a text file with a .reg extension and run it. This creates the "ExtensionManifestV2Availability" value with data 2 under the HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Google\Chrome key.

(When you open/run a .reg file, it updates your registry, usually preceded by a warning.)

Alternatively, you can do this manually: press the Windows key, type "run" (without the quotes) and press Enter, type "regedit" (without the quotes) and press Enter, then navigate as far as you can toward the HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Google\Chrome key.

You may find there is no "Chrome" key and will need to create it, as well as creating the "ExtensionManifestV2Availability" value.

--------------------------------------

  Windows Registry Editor Version 5.00

  [HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Google\Chrome]
  "ExtensionManifestV2Availability"=dword:00000002

In another article there by Didier Stevens [0], he posts a link to his GitHub [1], where he provides numerous such tools. Very interesting stuff!

    [0] String Obfuscation: Character Pair Reversal
        https://isc.sans.edu/diary/String%20Obfuscation%3A%20Character%20Pair%20Reversal/29654

    [1] https://github.com/DidierStevens/DidierStevensSuite
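For the curious, the trick named in the diary title can be sketched in a few lines of Python. This is my reconstruction from the title alone, not Didier's code: swapping adjacent character pairs is its own inverse, so the same routine both obfuscates and deobfuscates.

```python
# Sketch of "character pair reversal" obfuscation: swap each adjacent pair
# of characters. Applying the swap twice returns the original string.
def swap_pairs(s):
    chars = list(s)
    for i in range(0, len(chars) - 1, 2):
        chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)

print(swap_pairs("ehll oowlrd"))  # -> hello world
```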

One thing I've done to identify infrequent log entries within a log file is to remove all numbers from the file and print a frequency count of each distinct line. Basically it just helps to disregard timestamps (not just at the beginning of the line), line numbers, etc.

  sed 's/[0-9]//g' file.log | sort | uniq -c | sort -nr
This has been incredibly helpful in quickly resolving outages more than once.
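The same idea as a Python function, in case you want it inside a script rather than a one-liner (the function name is mine):

```python
# Rough Python equivalent of the shell pipeline: strip digits so timestamps
# and counters collapse together, then count how often each normalized
# line occurs. Rare (interesting) entries end up at the bottom.
import re
from collections import Counter

def line_frequencies(lines):
    counts = Counter(re.sub(r"[0-9]", "", line) for line in lines)
    return counts.most_common()  # most frequent first
```

For example, `line_frequencies(["a 1", "a 2", "b 3"])` yields `[("a ", 2), ("b ", 1)]`.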

As a PSA: this book is old enough to be public domain, the audiobook is (legally) available for free on LibriVox: https://librivox.org/the-brothers-karamazov-by-fyodor-dostoy... . The written version is available on Project Gutenberg: https://www.gutenberg.org/ebooks/28054 .

I listened to it while swimming laps. The readers have great, rhythmic pacing.



I highly recommend this "Let's Read" thread on James Bond from SomethingAwful: https://forums.somethingawful.com/showthread.php?threadid=38...

My takeaway on Bond is that he's way more interesting as a flawed protagonist, much in the same way Fleming himself was. Through the movies, he became symbolic and idolized in a way that I don't think Fleming intended.

The Casino Royale movie reboot was pretty great IMHO. The direction Bond is going now is pretty hit and miss. I'm guessing every few decades we'll need to tear down the mythology and start over with something better grounded in reality, and give a reminder of how imperfect Bond can be.


https://t.me/freedomf0x/12553

I haven't checked the content myself, but this Telegram channel is usually legit.


An employee, possibly. The whole company, unlikely. And either way, even if someone was bribed to introduce the attack there's zero reason to allow the hacked software to be downloaded now.

I work at a large and highly regulated (HIPAA) company and we have the equivalent of Electric Dylan/Pete Seeger with the axe: if someone at the VP+ level declares a major incident, our infosec team has a script that will lock down all inbound/outbound traffic, snapshot all our running machines for later forensics, lock our AWS IAM access down to a single incident response account, and move DNS for our web properties to a "we've been hacked" page. (OK, it obviously doesn't say that, but something similar that has been heavily vetted by legal and marketing ;-)). We've drilled and timed it out and can stop the ship in ~5 minutes.

Either SolarWinds doesn't have a major security incident response plan, or they don't have the stomach to pull the trigger. Neither is promising.


Been thinking of explaining technical debt using a book library as an example...

Say you want to start a lending library, you hire one person and stock 25 books. The stock is small and one person can easily remember all of them so the employee just piles them up in a corner. If a customer wants fiction or literature or whatever, the employee could easily look up the pile and pull it out.

Over time the collection grows from 25 to 50 books; the stock is still small and there's just one employee, so they're all just added to the pile.

50 grows to 150, so you hire one more person. The old guy feels that since there are two of them, one can look up the first 75 while the other searches the next 75, and organizing would be a waste of time and space.

When you hit 500 books the debt starts to kick in, but again you try to solve it by hiring more people. Some of the new hires want to categorize the books into proper racks, but that would mean shutting shop for a few days and not adding any new books. This is unacceptable to a non-technical manager, so things continue the old way.

By the time you hit 1000 books, some of the employees are fed up with the time-consuming work and quit; the replacements have no clue what is where. Most customers were served purely from the muscle memory of the old employees, and now that they're gone the business starts to crumble.


From my experience of reviewing Kubernetes deployments for security here's where I'd start on securing Kubernetes.

- Make sure that all the management interfaces require authentication, including the Kubelet, etcd and the API server. Some distributions don't do that consistently and from all perspectives. Whilst the API server generally is configured like this, I've seen setups where either etcd and/or the Kubelet are not, and that's generally going to lead to compromise of the cluster.

- Ensure that you've got RBAC turned on and/or stop service tokens being mounted into pods. Having a cluster-admin level token mounted into pods by default is quite dangerous if an attacker can compromise any application component running on your cluster.

- Block access to the metadata service if you're running in the cloud. For example, if you're running your k8s cluster on EC2 VMs, any attacker who compromises one container can use the metadata service to get the IAM credentials for the EC2 machine, which can be bad for your security :) This is likely best done with Network Policy, which you can also use to do things like block access from the container network to the node IP addresses.

- Turn off unauthenticated information APIs like cAdvisor and the read-only kubelet port, if you don't need them.

- Implement PodSecurityPolicy to reduce the risk of containers compromising the hosts
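As a quick illustration of the unauthenticated-API point, here's a hedged Python sketch that checks whether a node's read-only kubelet port answers without credentials. The /pods path and port 10255 are the common kubelet defaults, but verify them against your distribution:

```python
# Probe a node's read-only kubelet port. If the unauthenticated /pods
# endpoint returns 200, anyone on the network can enumerate your workloads.
import urllib.request

def kubelet_readonly_open(node_ip, port=10255, timeout=3):
    try:
        with urllib.request.urlopen(
            f"http://{node_ip}:{port}/pods", timeout=timeout
        ) as resp:
            return resp.status == 200
    except OSError:  # connection refused, timeout, DNS failure, etc.
        return False
```

Run it against each node IP; any `True` result means the port should be firewalled or disabled.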


Some excellent talks on defense in depth with Kubernetes:

- Hacking and hardening Kubernetes Clusters by Example by Brad Geesaman -> https://www.youtube.com/watch?v=vTgQLzeBfRU

- Shipping in Pirate-Infested Waters: Practical Attack and Defense in Kubernetes by Greg Castle -> https://www.youtube.com/watch?v=ohTq0no0ZVU


I use a PowerShell script like this when I don't want the computer to lock:

    $ws = New-Object -COM wscript.shell;
    while($true){ $ws.SendKeys("j"); Sleep 60;}

I've used it for demos, so the computer doesn't lock before the demo starts; it's pretty short and easy to remember. Also, on Windows, if you spam SendKeys("{Left}"), everything you type is backwards, and when you hit the Windows key it freezes the computer in an interesting way. Pretty fun.


> Google does a good job of building a sense of community within the organization.

That is the true test. You are supposed to pretend only to buy into it but never really believe it. Understanding that things function on two levels is critical. One level is the superficial "we are a family, community, we are not evil, making the world better". But that's the trap to catch all the naive people and extract extra work hours from them (possibly at the expense of family or personal time).

There is a second level of unspoken rules - "it really is about business and internal politics". You are supposed to discover and navigate a set of unwritten rules. And these usually don't get spelled out for you, because they're kind of ugly and often diametrically opposed to the official rules from the first level.

Slavoj Zizek likes to talk about this when he talks about institutional ideology and how there are rules and meta rules. The meta rules dictate how you relate to the official rules. Which ones you are supposed to break to get ahead, for instance. The other side is that you are given permission to do something but you are not really allowed to take advantage of that or you get in trouble. For example the whole "take any vacation time you want, we don't have fixed days". But you are expected to not really take more than a few or you'll be laid off eventually.

Here is an excerpt where he talks a bit about that: https://www.youtube.com/watch?v=pfO9gL28pAs (warning, he likes to use gross jokes and you might find his style unpalatable)


There are a couple of aspects to this issue. I'm 53.

Management: I would argue that it's critical to garner management skills well before you turn 40. If you wait until 50, you're going to find it very difficult to move from talent-oriented jobs to management ones.

Coding: I hate to generalize, but it's very likely you'll learn so much over your career that you will become an ineffective developer. You will know how to do things well and will have a difficult time doing things just to get them done. It's fairly common for projects to need completion over correctness and quality. This is where younger developers are great. They don't know they're creating technical debt, so they have no angst over it. But you will and this is bad for the project and for you. You probably need to find a place in software development where you can mentor and lead, but reduce your involvement with actual day-to-day coding.

Challenges: I personally suffer from "it-must-be-challenging-or-I-get-bored" syndrome. The longer you write code, the harder this is to suppress and the more you look for shiny things to work on. This is bad for you because it's bad for your employer. If you don't suffer from this, you're amazing and any employer would love to keep you until you're dead.


This isn't, in any way, a new problem. I did a presentation on this topic for OWASP AppSecEU 2015 (https://www.youtube.com/watch?v=Wn190b4EJWk&list=PLpr-xdpM8w...) and when doing the research for that I encountered cases of repository attacks and compromise.

IME the problem will continue unless the customers (e.g. companies making use of the libraries hosted) are willing to pay more for a service with higher levels of assurance.

The budget required to implement additional security at scale is quite high, and probably not a good match with a free (at point of use) service.


So, I've read most of these. Here's a tour of what is definitely useful and what you should probably avoid.

_________________

Do Read:

1. The Web Application Hacker's Handbook - It's beginning to show its age, but this is still absolutely the first book I'd point anyone to for learning practical application security.

2. Practical Reverse Engineering - Yep, this is great. As the title implies, it's a good practical guide and will teach many of the "heavy" skills instead of just a platform-specific book targeted to something like iOS. Maybe supplement with a tool-specific book like The IDA Pro Book.

3. Security Engineering - You can probably read either this or The Art of Software Security Assessment. Both of these are old books, but the core principles are timeless. You absolutely should read one of these, because they are like The Art of Computer Programming for security. Everyone says they have read them, they definitely should read them, and it's evident that almost no one has actually read them.

4. Shellcoder's Handbook - If exploit development is your thing, this will be useful. Use it as a follow-on from a good reverse engineering book.

5. Cryptography Engineering - The first and only book you'll really need to understand how cryptography works if you're a developer. If you want to make cryptography a career, you'll need more; this is still the first book basically anyone should pick up to understand a wide breadth of modern crypto.

_________________

You Can Skip:

1. Social Engineering: The Art of Human Hacking - It was okay. I am biased against books that don't have a great deal of technical depth. You can learn a lot of what's in this book from online resources and, honestly, from common sense. A lot of this book is infosec porn, i.e. "Wow, I can't believe that happened." It's not a bad book, per se; it's just not particularly helpful for a lot of technical security. If it interests you, read it; if it doesn't, skip it.

2. The Art of Memory Forensics - Instead of reading this, consider reading The Art of Software Security Assessment (a more rigorous coverage) or Practical Malware Analysis.

3. The Art of Deception - See above for Social Engineering.

4. Applied Cryptography - Cryptography Engineering supersedes this and makes it obsolete, full stop.

_________________

What's Not Listed That You Should Consider:

1. Gray Hat Python - In which you are taught to write debuggers, a skill which is a rite of passage for reverse engineering and much of blackbox security analysis.

2. The Art of Software Security Assessment - In which you are taught to find CVEs in rigorous depth. Supplement with resources from the 2010s era.

3. The IDA Pro Book - If you do any significant amount of reverse engineering, you will most likely use IDA Pro (although tools like Hopper are maturing fast). This is the book you'll want to pick up after getting your IDA Pro license.

4. Practical Malware Analysis - Probably the best single book on malware analysis outside of dedicated reverse engineering manuals. This one will take you about as far as any book reasonably can; beyond that you'll need to practice and read walkthroughs from e.g. The Project Zero team and HackerOne Internet Bug Bounty reports.

5. The Tangled Web - Written by Michal Zalewski, Director of Security at Google and author of afl-fuzz. This is the book to read alongside The Web Application Hacker's Handbook. Unlike many of the other books listed here it is a practical defensive book, and it's very actionable. Web developers who want to protect their applications without learning enough to become security consultants should start here.

6. The Mobile Application Hacker's Handbook - The book you'll read after The Web Application Hacker's Handbook to learn about the application security nuances of iOS and Android as opposed to web applications.


Source code of the worm: https://hastebin.com/gubegaqusi.xml

Pretty much what you'd expect.

Edit: This isn't the full source code. There was another PHP file visible on their website that unfortunately isn't visible anymore.


I've used streisand on DO (while traveling in China) and it worked well. There's also a similar project called algo[1] which provides a single protocol with maximum security, in contrast to streisand's multi-protocol flexibility (and increased surface area).

https://github.com/trailofbits/algo


Heads up: a simple yet production-ready NGINX location block to proxy to a public S3 bucket looks like this:

    # matches /s3/*
    location ~* /s3/(.+)$ {
        set $s3_host 's3-us-west-2.amazonaws.com';
        set $s3_bucket 'somebucketname';

        proxy_http_version 1.1;
        proxy_ssl_verify on;
        proxy_ssl_session_reuse on;
        proxy_set_header Connection '';
        proxy_set_header Host $s3_host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Authorization '';
        proxy_hide_header x-amz-id-2;
        proxy_hide_header x-amz-request-id;
        proxy_buffering on;
        proxy_intercept_errors on;
        resolver 8.8.4.4 8.8.8.8;
        resolver_timeout 10s;
        proxy_pass https://$s3_host/$s3_bucket/$1;
    }
Adding NGINX caching on-top of this is pretty trivial.

Also, heads up: in the proxy_cache_path directive, they should consider setting "use_temp_path" to off. By default NGINX writes responses to a temporary location first and then copies them into the cache; use_temp_path=off instructs NGINX to write them directly to the directories where they will be cached, avoiding unnecessary copying of data between file systems. use_temp_path was introduced in NGINX 1.7.10 and NGINX Plus R6.

    use_temp_path=off
Also, they should enable "proxy_cache_revalidate". This saves on bandwidth, because the server sends the full item only if it has been modified since the time recorded in the Last-Modified header.

    proxy_cache_revalidate on;

My preferred method:

Add an "expires" field to the token, this should contain a date after which the token is no longer valid. Now all token s auto-invalidate after a certain period.

Allow some or all tokens to "refresh" by calling a particular endpoint (call with valid token and get a token with expiry from now).

Optionally add some form of identifier to the token (user_id works great) so that you can push a message out to your servers that looks like this: "All tokens for x expiring before y are invalid". Once time y has passed your server can forget about the message. This will be a very small set (often 0) as very few people use the "log out my devices" features.

Logouts should be done client side by deleting the token.

If you are worried about your token being sniffed you are either not using HTTPS, or sticking it somewhere stupid.
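A minimal sketch of that scheme in Python (names and the in-memory dict are illustrative; a real deployment would use signed tokens such as JWTs and a shared store for the revocation messages):

```python
# Tokens carry an expiry; revocation is the broadcast message described
# above: "all tokens for user X expiring before time Y are invalid".
import time

revoked_before = {}  # user_id -> cutoff Y; entry can be dropped once Y passes

def issue_token(user_id, ttl=3600, now=None):
    now = time.time() if now is None else now
    return {"user_id": user_id, "expires": now + ttl}

def revoke_all(user_id, max_ttl=3600, now=None):
    # Y = now + max token lifetime: every token issued so far expires before Y.
    now = time.time() if now is None else now
    revoked_before[user_id] = now + max_ttl

def is_valid(token, now=None):
    now = time.time() if now is None else now
    if token["expires"] <= now:
        return False
    cutoff = revoked_before.get(token["user_id"])
    return cutoff is None or token["expires"] >= cutoff
```

Note the sketch shares the scheme's rough edge: tokens refreshed shortly after a revocation also fall before the cutoff, so a refresh endpoint should check `is_valid` first.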


This was a good read. As a DevOps Engineer at a company that does not have a distinct Ops department, who's also a Tech Lead, I have some thoughts I'd like to share.

First, while the ultimate goal of any engineer (even not among the Ops disciplines) should be to automate yourself out of a job, we have seen time and again that it is impossible to do so, as any good engineer will continue to advance the state of the art. Conclusively, there is no "finish line" for operations that will not be obsolete within 3 years. The concern that you'll just have to migrate across organizations, reaching the "finish line," rinsing and repeating is a non-issue. The notion of a finish line is really sugarcoated FUD. (The author's interesting thought experiment alludes to this and does refute the argument, so hopefully my statements simply complement the article in that regard. I call it a thought experiment because we will never arrive at this "you automated everything" goal.)

I absolutely agree with the author that we do not really need ops engineers. We do, however, need specific disciplines of software engineering. Specifically, I recommend The Systems Engineering Side of Site Reliability Engineering[1] (as well as the book), Site Reliability Engineering[2] from Google. The usenix article in particular describes three distinct disciplines of software development: systems engineering, site reliability engineering and software engineering. The individuals behind the roles have little to do with the roles themselves (rather, the causal chain is the other way around); it's often misunderstood, for whatever reason, that software engineers and operations engineers have different skillsets because of who they are. This is true, but it does not mean that a software engineer cannot, in short order relative to individuals with no software background whatsoever, transition into an operations role, or vice-versa. Orthogonally, identifying individuals with the skills in any of these three disciplines is critical to placing them in work that is personally and professionally rewarding, as well as more valuable to the organization than if they were placed in some other discipline. And sometimes, individuals do not even know of these disciplines or, for whatever reason, think they are suited for a discipline that they are not actually best at. I was one of these people (a software engineer before moving into operations). In essence, what I'm trying to demonstrate here is that these disciplines of software development are permanent (or have generations-length longevity) and we should not be concerned with being replaced or becoming obsolete. Indeed, it is the specific tasks that will change over time. Consider, for example, electrical engineers. We do not anticipate that EE's will be replaced by robots. Despite robots automating the process of manufacturing circuits, EE's will always be invaluable and irreplaceable. 
However, their specific responsibilities will change over time. This is why I said before that advancing the state of the art causes new work (or even new types of work) to exist. Finally -- and this is just a bonus -- any experience acquired by, say, an EE will be useful even if he or she transitions to a new discipline of engineering. In my experience, the best software engineers I have ever known understood in remarkable depth CPU architecture, memory models, networking protocols, configuration management, etc.

> there is practically no difference between a software engineer and an operations engineer - they both build and run their own systems - other than the domain: the software engineer builds and runs the top-level applications, while the (ex-)operations engineer builds and runs the infrastructure and tooling underneath the applications.

The above statement from the article's thought experiment vaguely describes two (of the three) software engineering disciplines that Hixson[1] talks about. Operations engineers and software engineers alike are, in this thought experiment, responsible for leveraging their expertise and talent (understand that I use the word talent according to the definition described by Hixson) at maximum efficiency. The manifestation of these disciplines is reflected in their domain, but the individual tasks themselves are only relevant today and will change tomorrow. The third discipline not described here (systems engineering) is very much relevant and deals specifically with the interactions among systems, which neither operations nor software engineers will focus on (or necessarily have significant talent in). Later in the article, the author sort of blends SRE (site reliability engineers) and SE (systems engineers) together. The distinction isn't important for the author to make her point, but I wanted to highlight it a little bit.

Second, I think the author describes an environment that strongly reflects the ideals of the DevOps movement. From my reading, I'm inferring that the author is aligned with these ideals. I consider this a big selling point if I ever wanted to consider Uber as a place of employment. As some other comments here on HN have noted: it is extremely rare and difficult to find an organization that has embraced DevOps principles with such purity. I'm fortunate to be employed at one of them (not Uber), and it sounds like Uber has made some good decisions as an organization in this regard. (Hopefully this paragraph can be to the benefit of any employment-seeking operations engineers. The statements in the article reflect positively on Uber, particularly if you are trying to move from a traditional operations role to a DevOps/SE/SRE role.)

Third, the article does a great job refuting the 3 identified arguments. In general, I can't agree more! The author takes the time to consider the merit of each argument and qualify the conditions under which they are true before refuting them, which makes it much easier to read coming from a more traditional organization. From my biased perspective, I don't even give these arguments the light of day and refute them without thinking twice about the qualifications that can alter their accuracy; so, one takeaway for me from the article has been to not make the assumption that these arguments are being made by like-minded individuals. It's quite likely that I'm too hard on people for bringing up concerns like these and, as a result, not open to new (old) ideas.

My final thought on the article is that, while it's not really news in most of the social circles I spend my time with (as a byproduct of having learned much of what I know from a stellar colleagues in a great work environment -- not because of any personal accomplishment), I really appreciate that the author took the time to write out these thoughts and publish them so that the broader software community can grow and adopt ideals that move our industry forward in a very positive, very significant way. So thanks to the author, and to aberoham who posted the link here on HN!

[1] https://www.usenix.org/publications/login/june15/hixson

[2] https://landing.google.com/sre/book.html


It's not like car safety. There's an enemy. The enemy is not random.

Patch and release only works against inept opponents, and there are plenty of non-inept opponents. It won't work against opponents who can develop or buy their own exploits. Patch and release gives the illusion of working because it stops vast numbers of inept attacks from clowns who just want to send spam emails.

Someone in military base security once said that you have to avoid tying up too much of your resources chasing kids throwing rocks at the perimeter fence. The real threat is the janitor who has access to the spare parts stores. Patch and release, and pattern-matching virus scanners, are chasing kids throwing rocks.


Great work, though I wish this blog were published after the redactions were updated so the findings could be really public.

In particular, I really like this:

"When you submit bugs, remember that you aren't actually entitled to anything. Unfortunately, that's how bug bounties work. It's a sellers market. If a program doesn't pay as much as you'd expect for a bug, just don't participate in that program again. What's the point of causing drama over a bug or two? Who is the magic internet man who's going to buy your exploit for $1,000,000 using magic internet money (bitcoin) that those Hacker News users keep on referencing? If anyone knows who this person is, do tell me! There's no such thing as a "union" for bug bounty hunters nor an easily accessible secondary blackmarket that pays for your bugs in a company."

Fascinating. It's almost as though a successful security researcher and bug bounty participant is saying the same things Tom and I keep saying here, despite a constant horde of people outside the industry who believe web application bugs are worth anything on the black market.

For anyone who would like to replicate this success, know the following:

1. The Web Application Hacker's Handbook is your friend. It's outdated but it's still the best foundation. Follow it up with The Browser Hacker's Handbook or The Mobile Application Hacker's Handbook.

2. You can optimize for exotic vulnerabilities in Google, Facebook et al which pay the highest bounties or you can optimize for low hanging fruit in companies which are just opening up a bounty program. The longer a company has a bounty program, the less of a chance you'll successfully XSS them in a day of assessing their web applications. Optimizing for finding security vulnerabilities serially in the highest paying bounty programs requires a much greater level of skill, vulnerability intuition ("what did the developers not think about or consider here?") and patience.

3. To do this seriously, learn to recognize vulnerabilities as contextual faults in the way a particular software's implementation matches the developer's design expectations. Try not to think of vulnerabilities as classifications, like XSS, CSRF, IDOR, etc. For a first pass to find low-hanging fruit it's helpful to proceed this way, but for more advanced work you want to look holistically.

4. If you want to specialize in application security at the binary level (sandbox escapes, memory corruption - vulnerabilities in operating systems and major OS applications), you'll want to learn reverse engineering. Start with Hacking: The Art of Exploitation and then proceed to Reversing: Secrets of Reverse Engineering or Practical Reverse Engineering. You'll also want to read The Art of Software Security Assessment and Security Engineering alongside those, to develop excellent source code auditing and binary penetration testing skills. Frankly you should read those no matter what you do in security.

5. It's very possible to earn a competitive tech salary just by finding vulnerabilities and reporting them (or selling them, if that's your bag, but, critically, you won't be selling website bugs). However, it's difficult and time consuming. It requires an orthogonal skill set to straight development, and while it might not be more time consuming it will certainly appear that way while you're doing it. Like much of security work, bug bounty hunting consists of long periods of research and analysis, during which you might feel like you haven't made any progress. This is interposed with brief moments of inspiration and a final moment of euphoria when you finally break something after poking at it for days or weeks or months.

You can earn a lot by finding lots of bug bounties, or you can do it by finding a few serious flaws in widely used "foundational" software in a year (think browsers, Flash, Rails, Windows, etc.). The latter method requires less context-switching, so if you can swing for that it's the way I'd recommend. All you need is three or four bugs in an OS or browser to match a Bay Area median salary.

As always, I'm very happy to help anyone who would like to get into security professionally.


In iOS 9 there is SFSafariViewController which doesn't give such control.

Some providers (e.g. Fitbit) require you use this instead of UIWebView in order to access their API, presumably to avoid accessing the DOM.

Not sure exactly how they are enforcing this though, since they don't provide their own OAuth library.

