Although I'm certainly no celebrity / important likely target of hackers, I'm interested in this just because recently I've gotten paranoid about my financial accounts (after a company I used to work for finally went public and I was fortunate to cash out an amount of $).
When hackers use such exploits, do they then basically have something like remote control over your phone, and can start exfiltrating data / manipulating apps while you're not paying attention? Or what do they do with it then? Are you able to tell, by seeing your phone slow down or start to have unexplained screen behaviors? Suppose that I'm logged into my Google apps on my phone, does that mean they have access to all my gmail, google docs, etc. as well?
Do these kinds of exploits also exist/get used to target people on their laptops and desktop machines as well? Or is that a little less likely, since your phone is specifically identified with you and people can easily go after your known phone number?
I wonder if there is some resource where people can read about how to detect and avoid such exploits and protect against them?
Make sure your big $$$ are not available easily. Find a bank/brokerage that will actually do their job verifying you before they dispense your money.
You are not able to defend yourself from targeted attacks. Period.
It is one thing to try to defend against attacks of opportunity (i.e. viruses, ransomware, etc.) and another to defend against people who actually know their job and for some reason find you an attractive target.
Thus, the best response is to not make yourself an attractive target in the first place, and if you do need to keep attractive things somewhere -- separate them from everything else.
Putnam Investments is really hard to get money out of. For example, when I tried to cash in an annuity, it required a medallion certificate from another bank. A medallion certificate is like a notarization, but it can only be issued by a bank.
Depends how motivated the attackers are. They can try to find another bank with weaker rules, perhaps open an account there first.
I needed one of those, and shopped around for a bit. While all the big names refused, or had waiting periods, fees, and other requirements, a local credit union gave me one immediately after I signed up for a savings account, for a minimal fee or none at all.
>Depends how motivated the attackers are. They can try to find another bank with weaker rules, perhaps open an account there first.
That isn't really incentivized in this case. Assuming by medallion certificate they meant "medallion signature guarantee" [0] as established by SEC Rule 17 Ad-15 [1], a core part of the system there is that the financial institution granting it accepts liability for any forgery, up to a specified prefix amount (and the transaction will be rejected if the stamp isn't enough to cover the transaction amount). So if they "find another bank with weaker rules" who gets them to issue a stamp for a few hundred grand they are on the hook for that loss.
As a result, it's actually taken pretty seriously, at least for significant amounts of money. This specific area isn't one where the guarantor gets to shrug their shoulders about it. Since they're going to be on the hook for hundreds of thousands to millions if they get it wrong, you need to be a known, established customer to even try, you have to go in person, and someone higher level is absolutely going to look at it personally. And even if an attacker did get past all that, the whole point is that the one being attacked still hasn't lost anything.
>a local credit union gave me one immediately after I signed up for a savings account, for a minimal fee or none at all.
What prefix though? How did you check out in terms of signup (long history as resident? local connections?)? Lots of stuff goes on behind the scenes. An F alpha prefix ($100k surety, credit union) isn't the same thing as a Z ($14 million surety).
Some smaller credit unions or local banks are less rigorous it seems and that incentivizes whoever wants to steal the money to go there with a fake ID, bills, etc.
> What prefix though?
F. Still a nice chunk of money for someone to steal. I guess, what I was surprised about was how different the rules were. Some quite strict, at large institutions, some pretty lax.
> How did you check out in terms of signup (long history as resident? local connections?)?
Called around and visited a few banks in the area. Not sure if/how they checked my history as a resident; there was no prior relationship with that particular credit union. As you say, perhaps behind the scenes they did a rather thorough background check, but it didn't feel that way at all based on the context.
Give written instructions to your bank requiring them not to engage in transactions over a certain amount without a certain set of verification procedures (for example, a call back with a prearranged password for any wire was one that I had with my old bank), and have them acknowledge receipt in writing as well. In the unlikely/unfortunate event that your money is stolen, recouping from the financial institution will be more straightforward if they didn't follow procedures and you can prove it.
Arguing from first principles, the first step in detecting a problem is to know your device's baseline operation. This means knowing the bevy of processes that are running, the resources they use, and the messages they send and to which hosts. With this baseline, you can now see if something is going wrong: a process you don't recognize, connections to hosts you don't recognize, and so on.
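To make that concrete, here is a minimal sketch of what such a baseline check could look like on a laptop or desktop (assuming Python with the third-party psutil library; phones generally don't expose this kind of introspection to their users). The baseline file name and the fields captured are illustrative choices only:

    # baseline_check.py - snapshot running processes and remote endpoints,
    # then diff the current state against a previously saved baseline.
    # A sketch, not a real monitoring tool (no hashes, parents, ports, etc.).
    import json
    import sys
    import psutil  # third-party: pip install psutil

    BASELINE_FILE = "baseline.json"  # illustrative path

    def snapshot():
        procs = sorted({p.info["name"] for p in psutil.process_iter(["name"]) if p.info["name"]})
        remotes = sorted({
            f"{c.raddr.ip}:{c.raddr.port}"
            for c in psutil.net_connections(kind="inet")
            if c.raddr
        })
        return {"processes": procs, "remote_endpoints": remotes}

    if __name__ == "__main__":
        current = snapshot()
        if len(sys.argv) > 1 and sys.argv[1] == "save":
            with open(BASELINE_FILE, "w") as f:
                json.dump(current, f, indent=2)
            print("Baseline saved.")
        else:
            with open(BASELINE_FILE) as f:
                baseline = json.load(f)
            for key in ("processes", "remote_endpoints"):
                for item in sorted(set(current[key]) - set(baseline[key])):
                    print(f"NOT IN BASELINE ({key}): {item}")

Run it once with "save" while the machine is in a known-good state, then re-run it periodically; anything it flags still needs human judgment, since legitimate software changes its behavior all the time.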
Of course, this is also the step where almost everyone, including devs, fails. How many devs know their phone to this degree? Even on our laptops, we tend to give way too much leeway to 3rd-party binaries, and allow the environment to get so noisy that any kind of signal is impossible to detect. It's a depressing trade-off we (almost) all make for convenience, using the (almost) good enough assumption that we're safe in the herd. It's actually a very, very dumb assumption, and I feel like it's something of a hacker golden age because of it; as long as they don't get too greedy and spook the herd, the gravy train is here to stay.
One argument against doing even this is that a hacker can take steps to hide their process. This has happened on PCs, with rootkits that hide certain processes. It may happen with phone malware too, if only to make it harder to automate detection and removal, if not to guard against watchful users (of which there are precious few).
In terms of capability, I speculate that the best an attacker can achieve is a sticky, privileged process that accepts arbitrary commands at runtime, which can be used to read the disk, analyze other running processes, install and exfil sensor data, etc. From the attacker's POV for high value targets, it probably feels like ssh'ing into a mystery box, and "see what you can do" - and they probably have a (growing) library of scripts to check for easy, juicy things. (I would guess that they would hate to see bespoke applications that have to be understood and reversed to get value out of.)
>In terms of capability, I speculate that the best an attacker can achieve is a sticky, privileged process that accepts arbitrary commands at runtime, which can be used to read the disk, analyze other running processes, install and exfil sensor data, etc.
The worst-case scenario would be if the attacker somehow manages to overwrite your motherboard's and/or SSD's firmware with malicious firmware. Then even if you reinstall your OS, they can still re-install the rootkit afterwards. I've only read about that type of malware; I've never seen or heard of anything like it in the wild.
And this is why I hate this whole mobile app thing. Sure, show me the data, but for any actual transfer action I much prefer having to use the old-fashioned one-time password list. At least then they need multiple things, and it is not remote.
I have a separate phone specifically used for banking (since banks require me to install their 2fa app on my phone) and have a unique sim card that I only use for banks.
It's not 100% foolproof I guess but at least it reduces my risk.
Check US Mobile. I haven't used them, but I've noticed when shopping for cheap plans that they let you build a plan that's really cheap if you don't need much.
If you care about this, consider using a security-oriented OS on desktop based on hardware virtualization: https://qubes-os.org. In this case, if you use your phone only to confirm the transactions (as the second factor), you should be safe enough.
How can you prove that this is more secure against state-level actors than iOS, which has billions (?) of users? Modern phones already have multiple levels of sandboxing. If some state really wants to target you, I would say this is the less secure solution.
The most common OSes are very heavily tested because of their huge user counts. These "secure" operating systems have a niche user base, which further reduces the amount of testing. The only factor in your favor is that it is more worthwhile to develop exploits for operating systems with larger adoption: you would need to be a very high-priority target before anyone starts developing exploits just for you and your obscure OS.
> How can you prove that this is more secure against state-level actors than iOS, which has billions (?) of users?
By comparing the number of exploits? Qubes relies on Xen, which is used by very big targets, so it should be under constant attack. Qubes uses hardware (VT-d) virtualization, which AFAIK was last broken by the Qubes founder in 2006: https://en.wikipedia.org/wiki/Blue_Pill_(software).
> By comparing the number of exploits? Qubes relies on Xen, which is used by very big targets, so it should be under constant attack
That often leads to quite misleading conclusions, for the reason I just gave: iOS, for example, is much more popular and more heavily tested, so of course the number of exploits is much larger, because it is also a much more interesting target with so many people using it.
How many people are using phones/laptops based on Xen? Xen is commonly used on the server side, not by the people who keep the interesting stuff on their personal devices.
I would argue that iOS is more dangerous because we can be fairly certain that it's not only vulnerable to exploits like Pegasus, but also phones home to FIVE EYES on a regular basis. Qubes is vulnerable to neither of these attacks, and its architecture is explicitly designed to isolate all components of the system with hardened hypervisor technology used by the most high-security servers in the world. For the most part, you don't even have to trust the device you're running Qubes on; the isolation technology is that robust.
What kinds of bank accounts do not allow outbound ACH transfers? It is my understanding that if you have a bank account (checking) or even a brokerage account, funds can be withdrawn from it via ACH.
The capabilities depend on the specific exploit but if you're dealing with something like Pegasus, the answer is yes to almost all those questions.
> I wonder if there is some resource where people can read about how to detect and avoid such exploits and protect against them?
Protecting against the cutting edge of current nation-state attacks [1] is... well, it's not impossible but it's up there. Just don't be important/interesting enough to catch their wrath is the TL;DR.
Open two more accounts at your bank. Code the first account as "deposit only", automatically rejecting any withdrawal requests against that account. Code this account to sweep into the second account every night. Activate positive pay[0] on the second account. Do not ever give anyone access to the second account. Code the 3rd account for positive pay. That's the only account you are going to withdraw money from.
Stop using debit cards. Only use credit cards. Pay them via positive pay from the 3rd account.
Stop using pull. Only use push. Only target the 3rd account with positive pay as the source of funds.
You should look at family-office setups anyway. It used to be something that was done at the $100M level, but these days the services have become cheap enough that it makes sense at the $10M level.
[0] Switch to a bank that supports positive pay for all electronic transactions.
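For anyone unfamiliar with positive pay: you pre-authorize each outgoing payment (payee, amount, reference) and the bank rejects or flags anything that doesn't match. A minimal sketch of that matching logic, with invented field names and not any bank's actual implementation:

    # Illustrative positive-pay matching, not any bank's real system.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class AuthorizedPayment:
        payee: str
        amount_cents: int
        reference: str  # e.g. check number or transfer ID

    def should_pay(issued: set, presented: AuthorizedPayment) -> bool:
        # The bank pays only items that exactly match something the customer
        # pre-authorized; everything else becomes an exception for the customer
        # to approve or reject.
        return presented in issued

    issued = {AuthorizedPayment("Card Issuer LLC", 250_000, "chk-1041")}
    print(should_pay(issued, AuthorizedPayment("Card Issuer LLC", 250_000, "chk-1041")))  # True
    print(should_pay(issued, AuthorizedPayment("Attacker Inc", 250_000, "chk-1041")))     # False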
When I received a huge amount of cash some years ago, my outlook changed completely. The first thing I did was split the money in order not to keep everything in one basket. I chose banks with unwieldy, cumbersome protection schemes that are awkward to use. And I set up a dedicated old laptop for banking (which still works).
My biggest paranoia wasn't about remote access though. I was really afraid someone could counterfeit my ID and just cash out as much as they could get away with. Fortunately, it hasn't happened to me.
Someone who has access to Pegasus is not going after finances. I had a modest amount of Ethereum on my PC and was hacked, but I still had control over my wallet.
If you have $1M+, it should not be tied to your SIM card, Gmail account, etc. If you use the same device to access your bank accounts and to browse the internet or receive messages, you are like an idiot who does not do backups!
> If you use the same device to access your bank accounts and to browse the internet or receive messages, you are like an idiot who does not do backups!
Using one's phone for banking, internet, and SMS is completely normal. Saying that 99.9999999999999999% of the world population that owns smartphones is an idiot isn't helpful.
The idiots are the governments of the world that haven't sanctioned Israel for allowing the continued trade of these cyberweapons by their citizens.
It's dangerous tossing that term around. There is enough real antisemitism in the world, and it's a real problem; we don't need to invent extra. Criticism of the state of Israel does not equate to antisemitism.
I don't doubt that some hide their hate behind that, but that does not mean that all criticism of Israel is "hidden antisemitism". Interpreting what people's "real feelings" are from such a small post is pretty complicated, and since antisemitism is very serious, you should not throw those accusations around lightly. I can't see anything antisemitic in the post you called antisemitic. What is it actually you think was antisemitic about it?
> I don't doubt that some hide their hate behind that, but that does not mean that all criticism of Israel is "hidden antisemitism".
We agree. Reread my post and it should be clear as I write "too many", not "everyone" or anything like that.
> I can't see anything antisemitic in the post you called antisemitic. What is it actually you think was antisemitic about it?
I didn't say that. It was a response to the generic wording of that post. Edit: Above I replied to your reply to throw8932894. Are you confusing me for throw8932894?
1. The state of Israel is the only state in the area where both Jews and Arabs are welcome and have a place in government and legislative bodies. Much of the "legitimate criticism" of Israel isn't directed at the Arabs in Israel; it seems to be, I claim, thinly veiled hate against the Jewish part of the population.
2. If one argues that it is against the Jewish part of the population because they dominate, then one cannot say it is against the state of Israel only, because then the distinction doesn't mean anything.
Genuinely asking: my understanding is that Israel the government considers itself a primarily Jewish ethnostate, with policies in place to evict Arabs from their lands in order to put Jewish people in there. My understanding is that this is where a lot of criticism and advocacy for Palestine comes from. In that case, in order to criticize the treatment of Palestinians, one would be criticizing a policy that benefits primarily the Jews of Israel. By this logic, is it antisemitic to point out human rights abuses that benefit Israeli Jewish citizens?
(I'm a layperson who isn't highly educated on this, and I'm aware that this conversation is complex and filled with nuance. I'm primarily asking to be educated about the matter based on the priors I have been told in the past.)
Remember that while I feel sorry for both sides, I'm heavily biased, so don't accept anything I write at face value but check it. On the other hand, unlike mainstream media and many who "support the Palestinian[1] cause", I'll be up front about it and ask you to verify things yourself, without referring you to more heavily biased sources.
> with policies in place to evict Arabs from their lands in order to put Jewish people in there.
I cannot defend everything Israel does, but the last time I can remember there being a lot of fuss on HN about evicting Arabs to give land to Israelis, it was about giving land back to the families whose property was stolen and given to Arabs during the brief time when Jordan occupied it.
Also remember that there used to be a whole lot of Jews in the lands surrounding Israel. Some of them moved voluntarily; others were driven out harshly.
Meanwhile Arabs got to stay in Israel[2].
In fact, more Jews moved into Israel from the surrounding countries than Arabs were expelled from Israel, so theoretically, if the Arabs had wanted to, they could have given the properties of the Jews who fled those countries to the Arabs who came from Israel.
That didn't happen, as the Arabs never accepted the UN's plan. So the neighbouring countries put their relatives in camps while waiting to "shove the Jews into the sea", while Israel welcomed its own people and integrated them with homes and places to work. As time went by, I think it became convenient to keep them there as a chess pawn.
[1]: I consistently use the word Arabs, except here. There has never been a country named Palestine, just a Roman administrative province and later a fiction fueled by crafty journalists who saw that the story of small Israel against the Arab world would put Israel in a good light, whereas "big" Israel (it is the size of a small county in Norway) against the poor "Palestinians"[3] in the camps would not.
[2]: Part of this seems to have been a cynical move by the Israelis. They asked them to stay, I understand, because if they had all left, all the neighbouring countries could just walk in and shoot everything that lived.
[3]: Actually Arabs, just living inside the borders of the small part that UN/UK gave to the Jewish part of the population.
PS: Again, I'm heavily biased. I write to make you see it from my side. I have been caught in factual errors before. When that happens and I can verify it, I apologize and try not to repeat those mistakes, again unlike mainstream journalists.
The government of Israel certainly has power over NSO's operations.
>A yearlong Times investigation, including dozens of interviews with government officials, leaders of intelligence and law-enforcement agencies, cyberweapons experts, business executives and privacy activists in a dozen countries, shows how Israel’s ability to approve or deny access to NSO’s cyberweapons has become entangled with its diplomacy. Countries like Mexico and Panama have shifted their positions toward Israel in key votes at the United Nations after winning access to Pegasus. [1]
2FA is a good option for securing your centralized accounts. But unfortunately, if you're logged in on your phone and your phone is hacked, well, it's still game over.
For crypto currencies it may help to store them on a hardware wallet, since accessing your money will require explicit interaction. But, as far as I understand (please correct me, not up to date with the security mechanisms of hardware wallets), if your computer is compromised while doing it, you can still lose it.
Just for people who don't know, it shows the relevant data regarding the transaction: amount, currency, target address.
Now, if you verify that data, you are safe… if the original address was correct. But as we are talking about a sophisticated targeted attack, where did you get the original address from? Because if it was your phone or your computer, we are back to step one, as that might already be manipulated.
The advantage of centralized services tied to your clear identity is that they do some diligence to ensure the person accessing your account is actually you. You (often) even have a reasonable recourse to undo things that have been done fraudulently.
It’s regular government employees who get access to Pegasus. I’d be shocked if it had never been used in an unauthorized manner for straight up financial crimes.
It's actually easier for an attacker on desktops/laptops. Phones have excellent sandboxing and defenses by comparison.
The ideal attacker would find a way to silently steal a credential (e.g. session cookie) from your phone, then use it on a different device. That's not going to be something that makes a lot of noise on your device itself.
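As a rough illustration of one server-side mitigation (a minimal sketch with invented names, not how any particular service actually does it), a service can bind a session token to attributes of the device that created it and re-check them on every request, so a cookie replayed from a different device looks suspicious:

    # Sketch: bind a session to a fingerprint of the issuing client, so a cookie
    # replayed from a very different device/network is rejected. Names are illustrative.
    import hashlib
    import hmac
    import secrets

    SERVER_KEY = secrets.token_bytes(32)  # in practice, a persistent server-side secret
    _sessions = {}                        # session_id -> expected fingerprint

    def _fingerprint(user_agent, ip_prefix):
        msg = f"{user_agent}|{ip_prefix}".encode()
        return hmac.new(SERVER_KEY, msg, hashlib.sha256).hexdigest()

    def create_session(user_agent, ip_prefix):
        session_id = secrets.token_urlsafe(32)
        _sessions[session_id] = _fingerprint(user_agent, ip_prefix)
        return session_id

    def validate(session_id, user_agent, ip_prefix):
        expected = _sessions.get(session_id)
        if expected is None:
            return False
        # A mismatch means the cookie is being presented by a client that doesn't
        # look like the one that logged in.
        return hmac.compare_digest(expected, _fingerprint(user_agent, ip_prefix))

    sid = create_session("Mozilla/5.0 (iPhone ...)", "203.0.113.")
    print(validate(sid, "Mozilla/5.0 (iPhone ...)", "203.0.113."))  # True
    print(validate(sid, "curl/8.4.0", "198.51.100."))               # False: likely replayed cookie

Of course an attacker can copy the user agent and route traffic to match, so this only raises the bar rather than solving the problem.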
Once an attacker has gained root access to the device they can:
1. Access any data on the device regardless of application security (including applications that may request a separate password be entered as this password entry can be captured). This access includes logging into web services (e-mail included) pretending to be your phone and downloading or manipulating information stored or transmitted by the service.
2. Enable the microphone and cameras at any time.
3. Track location at any time via enabling GPS or monitoring for nearby WiFi, Bluetooth, or cell tower device IDs.
4. Modify the user interface to report incorrect status, for example, incorrect battery level, GPS disabled when it's really enabled, incorrect data transfer amounts, etc.
5. Connect to other devices via WiFi or Bluetooth and interrogate them to find other devices or people nearby, and potentially attack those devices too.
An attacker can achieve similar outcomes with root access to other electronic devices -- laptops, tablets, desktop computers, watches, TVs, WiFi-enabled LED light bulbs, home appliances, cars, etc. Obviously what one can do with a device depends on the sensors it contains (it must have a camera to secretly take images or video).
Unless the attacker is reckless with turning on the video camera, microphone, GPS, WiFi, Bluetooth, etc all at once and draining battery much faster than expected, or transferring large amounts of data, you generally wouldn't notice anything different about your device. You would also probably have a hard time actively detecting the attack as the implant would be constantly watching for signs of debugging/investigation and disable/delete itself in such situations.
Generally the implant would be non-persistent, residing only in the volatile memory of the device, and would be forgotten soon after the device is powered down. Regardless of persistence, capturing the implant from the device would be very difficult and expensive for an individual to perform, but within reach of a state actor or of security researchers with a lot of time on their hands, should they have the patience.
If an implant were to be persistent and survive a device reboot, you could rapidly turn off the device (physically cut power from the battery), desolder the non-volatile memory chips and recover data similar to the process shown in [1]. There would be more steps involved if the device is encrypted (for example if the key is held in a TPM), but as you know the password to unlock the device, you could just ask the TPM nicely to give up the key. Failing that, there is FIB editing or other attacks against TPMs to recover keys. See [2] and [3] for some examples.
To detect a non-persistent implant, you'd follow a similar process but would have to quickly cool the volatile memory chip (see [3]) and then cut lines to the chip and insert new probes instead of desoldering it (the heat from desoldering would result in the volatile memory being cleared too quickly). Apple's Secure Enclave processor, as an example of a growing trend, encrypts and decrypts blocks of data stored in and retrieved from volatile memory so you'd additionally need to attack the Secure Enclave processor to retrieve the required keys to decrypt the volatile memory with.
The irony is that the same security features which are designed to keep your device secure also inadvertently make it prohibitively time-consuming and expensive to inspect your device to detect a hidden implant in use. If you're concerned you could be a target (investigative journalists, for example), the best approach is probably to assume the device is always compromised and use non-technological approaches to avoid or frustrate an attacker. Or perhaps you could find security researchers who'd love nothing more than finding and unraveling the secrets of a sophisticated implant (see [5]).
[4] https://www.youtube.com/watch?v=Ej-Nr79bVjg (Cold Boot Attacks on Encryption Keys | J. Alex Halderman, Seth D. Schoen, Nadia Heninger, William Clarkson, William Paul, Joseph A. Calandrino, Ariel J. Feldman, Jacob Appelbaum, and Edward W. Felten | 17th USENIX Security Symposium 2008)
NSO is just the one that sells a fully weaponized product but many companies out there are capable of selling you exploits with similar capabilities.
Like Zerodium, Immunity Inc., etc.
So much this. The discussion around NSO (specifically in Israel this past week) has become so exhausting.
NSO's marketing enjoys the fact that they are portrayed as some superpowered company that has been, and always will be, able to get full control of every phone on earth. One dramatic news investigation showed exclusive video of NSO-branded server racks[1] in an African country. Who cares about the servers? All Pegasus needs is an internet connection; you could probably run it from a Chromebook.
As the NSO 0-day bank has changed over the years, so have their capabilities. The NSO of 3 years ago is not the same as today and is not the same as the 2023 version. These 0-days might be known at 100 other companies with less aggressive marketing arms
I propose that any article like this not refer to it as "NSO spyware", but instead refer to it as "Israeli spyware".
The reality is that while NSO Group is a private company, it has deep links to the Israeli government and generally doesn't allow its services to be used against the interests of the Israeli state.
Hiding behind a corporate name to maintain Israel's reputation in international media isn't really okay.
I would suggest waiting, since there is news about a deal to sell NSO to a US venture fund. We could then refer to it as "US spyware" and skip the renaming part.
Sounds reasonable. But we have to be fair. We should start referring to Google as "American data gatherers" and Meta as "American efforts to improve lives by showing more relevant ads to people"/s. Don't pretend that they don't both have deep links in American politics.
And as has been noted before, they have similar relationships with the CIA/NSA to the ones NSO has with Israeli intelligence. A lot of people shuffle back and forth between those organizations.
Not as "scary" because they aren't selling exploits to nation states and instead spying on their users/the internet.
I apologize. It seems that you entered your comment right where the other comment was moments before. I saw your comment already grayed due to downvotes and since the previous comment gained some downvotes, I wrongly assumed that it was edited.
The original comment was made by the user iqanq. Once again, I'm sorry.
Painting critics of the Israeli government as antisemites is among the most common smear tactics used to damage their credibility. Therefore, by the same reasoning, you could be accused of acting as a shill for the Mossad.
These articles are kinda pointless. Every diplomat knows the score right now. Every state worth talking about is in whatever phone they want to be in. This is why the Russian ambassador memorizes code words in order to communicate with Moscow. People have basically given up trying to keep vital secrets over a phone.
What about patching the vulnerabilities rather than running around in anger? We can sue the hell out of them and convince the Israeli government to ban them altogether, but this is obviously doomed to repeat itself; somebody will inevitably take their place sooner or later, legally or illegally.
Good for us if that's so. I'm just concerned that the bugs existed long enough for the exploit to be relevant, and are still haunting some diplomats' phones despite so many publications. And that many people are discussing legal and political solutions (doomed to be ineffective IMHO) rather than technical ones.
A lot of these issues will not be solvable technically, though. Given the way systems/software is built currently, vulnerabilities are going to exist and there are very highly paid and motivated people working on figuring out how to exploit them. It also does not help that mobile devices are in use for much longer than they receive security updates etc.
I am not saying that these issues can/will be solved by legal and political means (especially given that it is not restricted to a single country), but it seems rather unlikely that these issues can be solved by pure technical means at the current point.
This revelation comes from a security investigation that the Finnish government started in autumn 2021. So when the exploit was discovered, they checked to see if they were hit and hopefully took actions to protect themselves if they were.
Patching won't help if you are a diplomat who has to use the cell towers of a country that is spying on you. Also, all phone OSes allow incoming SMS messages that are not visible to the user, because that is how they are built. Those messages exist for technical reasons, and that also makes them easy to exploit.
Also, the Israeli government simply can't forbid its companies from doing what US companies are not forbidden to do, because that option is only available to totalitarian states.
Patching is not the issue here; the issue is your ability to grab your own government (and, if you are really lucky, others') by the balls and squeeze hard if they do this stuff. If you do not have the ability to grab the government by the balls, then the government is squeezing yours already.
I think it would be good hygiene to completely reset phones every year or half-year. Now, if it were common practice, I guess the exploits would get around that too (or they already do?)
I'm curious how many of the known exploits today would persist past a "restore from backup" via iCloud. To be truly successful, would you need to take no history from your pre-reset phone forward?
When are governments going to finally realize that voice calls, text, and maybe plaintext email are enough for work phones?
Is playing Candy Crush on your work phone really a mission-critical cyberpriority?
All the high-security executive/legislative people (at least in the US) have two phones: the personal phone and the work phone. Whoever made the decision for the work phones to be "smart" needs to be fired. The old dumb Blackberries could easily be resurrected if a government-sized buyer committed to a purchase.
Because if I'm a high-ranking diplomat or military official, I may need to get plane tickets, book an Uber and hotel, set up a meeting on whatsapp, and read a news article at the drop of a hat
I'm sure they want to be able to share pictures and video and use PowerPoint too... there is a published list of commercial mobile devices certified for classified use at https://www.nsa.gov/Resources/Commercial-Solutions-for-Class... (looks like they mostly have Samsung Galaxies).
Is there some overview about what phones governments use in high-security contexts? It would be interesting to see what they consider secure, since that's probably informed by their own capabilities. Last time I checked, Obama was using a BlackBerry.
No, they use zero-day exploits in common media formats. The spy sends you a message containing an image or PDF, your device parses it and is exploited, and the malware then removes the message before there is ever a notification about it. You will never know that it happened.
I can't believe that all text messages aren't stored somewhere on the NSA (or equiv.) server (so it should be easy to quickly find the zero-day after a single attack). They probably just aren't motivated enough to expose the zero-days associated with it.
Do you really think that the NSA would be bound by sanctions? They’ll ping the Israeli government and ask for access if they need it and they won’t be turned down it would be just a matter of price.
The NSO isn’t a state run outfit outright but it has been used by Israel to score foreign relationship wins just like any other export and specifically arms export are used by other governments.
NSO is literarily the bargain bin option when it comes to SIGINT/COMINT, and for most of their clients they are pretty much the only option to get a high end targeted capability to compromise mobile devices.
iPhones can still send and receive SMS, and it's also not particularly difficult to send a crafted iMessage. A lot of these attacks also chain multiple exploits, e.g. an RCE in a 3rd-party messaging app combined with a sandbox escape/privesc on the local device.
And even without that, if you get an RCE within the context of a messaging app you might be able to get most of what you need, since you would probably be able to read/write arbitrary memory within the context of that process and interact with whichever APIs the app has permissions for, which for messaging almost always includes the microphone and camera, and often location too.
The only thing you usually don't get from running an exploit within the context of a single app is persistence, but if your exploit can survive the app being suspended then, as most people rarely reboot their phones, you can get pretty long-lived sessions too.
Contrary to popular belief, iPhones and Android phones have really poor security and new exploits are discovered all the time.
So a properly formatted text message is all that's required these days.
It's like in the dotcom days when 90% of the web was open to SQL injection.
You are right that these attacks take more skill, or a little bit of money, so in that regard it's not the same.
But in multiple ways I think it's the same; like that it's obvious that security is still not a priority when building the software and that you as a user have to assume that the platforms are compromised.
No it's not. I don't think you realize the skill gap.
There is no SQLmap for iPhones, and a "Metasploit" for iPhones costs tens of millions and requires you to be able to negotiate contracts at a state level…
The amount of money and skill that is required to identify these vulnerabilities and develop them into functional exploits is pretty insane.
It goes well beyond what even a basic RCE due to, say, unsafe deserialization in Java requires.
Anyone without any knowledge in programming could probably learn how to identify and exploit a SQL injection even without automated tools within days if not hours.
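To illustrate how low that bar is, here is a minimal sketch (Python's built-in sqlite3 and a throwaway table, purely for illustration) of a classic injectable query next to the parameterized version that prevents it:

    # Classic SQL injection and its fix, sketched with sqlite3 and a throwaway table.
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
    db.execute("INSERT INTO users VALUES ('alice', 1), ('bob', 0)")

    user_input = "nobody' OR '1'='1"  # attacker-controlled string

    # Vulnerable: the input is spliced into the SQL text, so the OR '1'='1'
    # clause makes the WHERE condition match every row.
    leaked = db.execute(
        f"SELECT name FROM users WHERE name = '{user_input}'"
    ).fetchall()
    print(leaked)  # [('alice',), ('bob',)] -- everything leaks

    # Fixed: a parameterized query treats the input as data, not SQL.
    safe = db.execute(
        "SELECT name FROM users WHERE name = ?", (user_input,)
    ).fetchall()
    print(safe)  # [] -- no row is literally named "nobody' OR '1'='1"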
On the other hand even experienced developers look at something like FORCEDENTRY and can barely comprehend it.
Any reasonably complex piece of software will have vulnerabilities. In other words, vulnerabilities are not a variable for the security equation, they are a constant. When designing something, vulnerabilities will exist. Generally, vulnerabilities, on their own, are not a great indication of how security is prioritized internally in any company.
When security researchers report SERIOUS security bugs to the manufacturers, as happened again and again the last years, without them acknowledging or fixing them for many months then I think it's safe to say they don't really care about security.
You can talk about complex software and how vulnerabilities will always exist as much as you want, but there is no excuse for these big companies not to fix major bugs like this within a week of when they're reported. I don't care if that means that the developers have to postpone their fancy AI face recognition feature that will make your face look like an emoji. NO EXCUSES.
I actually wish NSO Group would sell their spying software to absolutely anyone willing to pay them money for it.
I’m not okay with anyone at all having it, so maybe if everyone could have it, the industry would have to get their shit together and actually patch the exploits.
Did you read about the ForcedEntry exploit? They implemented basically an entire emulator inside of one pass of an obscure PDF compression algorithm. It’s perhaps the most complicated hack I have ever seen by an order of magnitude at least.
Meanwhile, they take a 30% cut from developers and force everyone to buy a new phone every year. Microsoft's monopolization of Windows is child's play compared to this phone racket.
My iPhone 7 is a hand-me-down that I got 3 years ago. I see no reason to upgrade until it A) dies or B) stops receiving security updates, at which point I'll probably just get another hand-me-down from someone who likes new things more than I do.