Agree completely that it's absolutely wild to run such a system without backups. But at this point no government should keep critical data on foreign cloud storage.
They are overwhelmingly whitelabeled providers. For example, Samsung SDI Cloud (the largest "Korean" cloud) is an AWS white label.
Korea is great at a lot of engineering disciplines. Sadly, software is not one of them, though it's slowly changing. There was a similar issue a couple of years ago where the government's internal intranet was down for a couple of days because someone deployed a switch in front of outbound connections without anyone noticing.
It's not a talent problem but a management problem - similar to Japan's issues, which is unsurprising as Korean institutions and organizations are heavily based on Japanese ones from back in the JETRO era.
I spent a week of my life at a major insurance company in Seoul once. The military-style security, the obsession with corporate espionage, when all they were working on was an internal corporate portal for an insurance company… The developers had to use machines with no Internet access, and I wasn’t allowed to bring my laptop with me lest I use it to steal their precious code. A South Korean colleague told me it was this way because South Korean corporate management is stuffed full of ex-military officers who carry the attitudes they get from defending against the North into the corporate world. No wonder the project was having so many technical problems, but I couldn’t really solve them, because ultimately the problems weren’t really technical.
> South Korean corporate management is stuffed full of ex-military officers
For those unaware, all "able-bodied" South Korean men are required to do about two years of military service. This sentence doesn't do much for me. Also, please remember that Germany also had required military service until quite recently. That means anyone "old" (over 40) and doing corp mgmt was probably also a military officer.
The way it was explained to me was different... yes, all able-bodied males do national service. But there's a different phenomenon in which someone serves some years active duty (so this is not their mandatory national service, this is voluntary active duty service), in some relatively prestigious position, and then jumps ship to the corporate world, and they get hired as an executive by their ex-comrades/ex-superiors... so there ends up being a pipeline from more senior volunteer active duty military ranks into corporate executive ranks (especially at large and prestigious firms), and of course that produces a certain culture, which then tends to flow downhill
Also Israel - and their tech ecosystem is tier 1.
As somebody who has also done work in Korea (with one of their banks), my observation was that almost all decision-making was top-down: people were forced to do a ton of monotonous work based on the whims of upper management, and the people below could not talk back. I literally stood and watched a director walk in after we had racked a bunch of equipment and comment that the disk arrays should be higher up. When I asked why (they were at the bottom for weight and centre-of-gravity reasons), he looked shocked that I even asked and tersely said that the blinking lights of the disks at eye level show the value of the purchase better.
I can't imagine writing software in that kind of environment. It'd be almost impossible to do clean work, and even if you did it'd get interfered with. On top of that nobody could go home before the boss.
I did enjoy the fact that the younger Koreans we were working with asked me and my colleague how old we were, because my colleague was 10 years older than me and they were flabbergasted that I was not deferring to him in every conversation, even though we were both equals professionally.
This was circa 2010, so maybe things are better, but oh my god I'm glad it was business trips and I was happy to be flying home each time (though my mouth still waters at the marinated beef at the bbq restaurants I went to...).
> That means anyone "old" (over 40) and doing corp mgmt was probably also a military officer.
Absolutely not. It was very common in Germany to deny military service and instead do a year of civil service as a replacement. Also, there were several exceptions from the """mandatory""" military service. I have two brothers who had served, so all I did was tick a checkbox and I was done with the topic of military service.
Depends on if these were commissioned officers or NCOs. Basically everyone reaches NCO by the end of service (used to be automatic, now there are tests that are primarily based around fitness), but when people specifically call out officers they tend to be talking about ones with a commission. You are not becoming a commissioned officer through compulsory service.
This - you and half of the smart people here in the comments clearly have no idea what it's like to live across the border from a country that wants you eradicated.
I've done some work for a large SK company and the security was manageable. Certainly higher than anything I've seen before or after and with security theater aspects, but ultimately it didn't seriously get in the way of getting work done.
I think it makes sense that although this is a widespread problem in South Korea, some places have it worse than others; you obviously worked at a place where the problem was more moderate. And I went there over a decade ago, and maybe even the place I was at has lightened up a bit since.
That doesn't seem accurate at all. The big 3 Korean clouds used inside Korea are NHN Cloud, Naver Cloud and now KT. Which one of these is whitelabeled? And what's the source on Samsung SDI Cloud being the "largest Korean cloud"? What metric?
NHN Cloud is in fact being used more and more in the government [1], as well as playing a big part in the recovery effort of this fire. [2]
No, unlike what you're suggesting, Korea has plenty of independent domestic cloud and the government has been adopting it more and more. It's not on the level of China, Russia or obviously the US, but it's very much there and accelerating quickly. Incomparable to places like the EU which still have almost nothing.
I am very happy with the software that powers my Hyundai Tucson hybrid. (It's a massive system that runs the gas and electric engines, recharging,
shifting gears, braking, object detection, and a host of information and entertainment systems.) After 2 years, 0 crashes and no observable errors. Of course, nothing is perfect: maps suck. The navigation is fine; it's the display that is at least 2 decades behind the times.
I've been working for a Korean Hyundai supplier for two years training them in modern software development processes. The programming part is not a problem, they have a lot of talented people.
The big problem from my point of view is management. Everyone pushes responsibility and work all the way down to the developers, so that they do basically everything themselves: from negotiating with the customer and writing the requirements (or not) to designing the architecture, writing the code and testing the system.
If they're late, they just stay and work longer, including on weekends, and sleep at the desk.
If the dev does everything, their manager may as well be put in a basket and pushed down the river. You can be certain there are a lot of managers. The entire storyline sounds like enterprise illness to me to be honest.
I’ve driven a Tucson several times recently (rental). It did not crash but it was below acceptable. A 15 year old VW Golf has better handling than the Tucson.
> Korea is great at a lot of engineering disciplines. Sadly, software is not one of them
I disagree. People say the same about Japan and Taiwan (and Germany). IMHO, they are overlooking the incredible talents in embedded programming. Think of all of the electronics (including automobiles) produced in those countries.
What about automobiles from Japan, Korea, and Germany? They are world class. All modern cars must have millions of lines of code to run all kinds of embedded electronics. Do I misunderstand?
The last time I heard of Joyent was in the mid-2000s on John Gruber’s blog when it was something like a husband-and-wife operation and something to do with WordPress or MovableType - 20 years later now it’s a division of Samsung?
In the meantime, they sponsored development of Node.js in its early days, created a cloud infrastructure based on OpenSolaris and eventually got acquired by Samsung.
Others have pointed out: you need uptime too. So a single data center on the same electric grid or geographic fault zone wouldn’t really cut it. This is one of those times where it sucks to be a small country (geographically).
> so a single data center on the same electrical grid or geographic...
Yes, but your backup DCs can have diesel generators and a few weeks of on-site fuel. It has some quakes, but quake-resistant DCs exist, and SK is big enough to site 3 DCs at the corners of an equilateral triangle with 250 km edges. Similar for typhoons. Invading NK armies and nuclear missiles are tougher problems, but having more geography would be of pretty limited use against those.
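As a rough sanity check on that siting claim (the three cities and their coordinates below are my own illustrative picks, not a siting proposal), great-circle distances between widely separated South Korean cities do come out well over 150 km:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))  # 6371 km = mean Earth radius

# Approximate coordinates, for illustration only
sites = {"Seoul": (37.57, 126.98), "Busan": (35.18, 129.08), "Gwangju": (35.16, 126.85)}
names = list(sites)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        a, b = names[i], names[j]
        print(a, "-", b, round(haversine_km(*sites[a], *sites[b])), "km")
```

Not a perfect equilateral triangle, but enough separation that a single fire, quake or grid failure is unlikely to touch more than one site.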
They fucked up, that much is clear, but they should not have kept that data on foreign cloud storage regardless. It's not like there are only two choices here.
> they should not have kept that data on foreign cloud storage regardless. It's not like there are only two choices here
Doesn't have to be an American provider (Though anyone else probably increases Seoul's security cross section. America is already its security guarantor, with tens of thousands of troops stationed in Korea.)
And doesn't have to be permanent. Ship encrypted copies to S3 while you get your hardened-bunker domestic option constructed. Still beats the mess that's about to come for South Korea's population.
I'm aware of a big cloud services provider (I won't name any names, but it was IBM) that lost a fairly large amount of data. Permanently. So that too isn't a guarantee. They simply should have made local and off-line backups, that's the gold standard, and ensured that those backups are complete and can be used to restore a complete working service from scratch.
>I'm aware of a big cloud services provider (I won't name any names but it was IBM) that lost a fairly large amount of data. Permanently. So that too isn't a guarantee.
Permanently losing data at a given store point isn't relevant to losing data overall. Data store failures are assumed or else there'd be no point in backups. What matters is whether failures in multiple points happen at the same time, which means a major issue is whether "independent" repositories are actually truly independent or whether (and to what extent) they have some degree of correlation. Using one or more completely unique systems done by someone else entirely is a pretty darn good way to bury accidental correlations with your own stuff, including human factors like the same tech people making the same sorts of mistakes or reusing the same components (software, hardware or both). For government that also includes political factors (like any push towards using purely domestic components).
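The point about hidden correlation can be made concrete with a toy calculation (the probabilities below are invented purely for illustration, not measured figures):

```python
# Illustrative annual loss probabilities (assumed numbers, not real data)
p_site = 1e-2          # chance any one repository is lost in a given year
p_shared_cause = 1e-3  # chance a common-mode event (same software bug, same
                       # ops team, same vendor stack) destroys ALL copies at once

# Three truly independent copies: all must fail in the same window
p_all_independent = p_site ** 3

# Three nominally independent copies with a hidden common-mode risk
p_all_correlated = p_site ** 3 + p_shared_cause

print(f"independent copies: {p_all_independent:.1e}")
print(f"with correlation:   {p_all_correlated:.1e}")
```

Even a small common-mode probability dwarfs the product of independent failures, which is why using a completely foreign system as one of the copies is such an effective way to break accidental correlations.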
>They simply should have made local and off-line backups
FWIW there's no "simply" about that though at large scale. I'm not saying it's undoable at all but it's not trivial. As is literally the subject here.
> Permanently losing data at a given store point isn't relevant to losing data overall.
I can't reveal any details but it was a lot more than just a given storage point. The interesting thing is that there were multiple points along the way where the damage would have been recoverable but their absolute incompetence made matters much worse to the point where there were no options left.
> FWIW there's no "simply" about that though at large scale. I'm not saying it's undoable at all but it's not trivial. As is literally the subject here.
If you can't do the job you should get out of the kitchen.
>I can't reveal any details but it was a lot more than just a given storage point
Sorry, brain not really clicking tonight and I used lazy, imprecise terminology here; it's been a long one. But what I meant by "store point" was any single data repository that can be interacted with as a unit, regardless of implementation details, that's part of a holistic data storage strategy. So in this case the entirety of IBM would be a "storage point", and then your own self-hosted system would be another, and if you also had data replicated to AWS etc. those would be others. IBM (or any other cloud storage provider operating in this role) effectively might as well simply be another hard drive. A very big, complex and pricey magic hard drive that can scale its own storage and performance on demand, granted, but still a "hard drive".
And hard drives fail, and that's ok. Regardless of the internal details of how the IBM-HDD ended up failing, the only way it'd affect the overall data is if that failure happened simultaneously with enough other failures at local-HDD and AWS-HDD and rsync.net-HDD and GC-HDD etc. that it exceeded available parity to rebuild. If these are all mirrors, then only simultaneous failure of every single last one of them would do it. It's fine for every single last one of them to fail... just separately, with enough of a time delta between each one that the data can be rebuilt on another.
>If you can't do the job you should get out of the kitchen.
Isn't that precisely what bringing in external entities as part of your infrastructure strategy is? You're not cooking in their kitchen.
Ah ok, clear. Thank you for the clarification. Some more interesting details: the initial fault was triggered by a test of a fire suppression system, that would have been recoverable. But someone thought they were exceedingly clever and they were going to fix this without any downtime and that's when a small problem became a much larger one, more so when they found out that their backups were incomplete. I still wonder if they ever did RCA/PM on this and what their lessons learned were. It should be a book sized document given how much went wrong. I got the call after their own efforts failed by one of their customers and after hearing them out I figured this is not worth my time because it just isn't going to work.
Thanks in turn for the details, always fascinating (and useful for lessons... even if not always for the party in question dohoho) to hear a touch of inside baseball on that kind of incident.
>But someone thought they were exceedingly clever and they were going to fix this without any downtime and that's when a small problem became a much larger one
The sentence "and that's when a small problem became a big problem" comes up depressingly frequently in these sorts of post mortems :(. Sometimes it feels like, alongside all the checklists and training and practice and so on, there should also simply be the old Hitchhiker's Guide "Don't Panic!" sprinkled liberally around, along with a dab of red/orange "...and Don't Be Clever" right after it. We're operating in alternate/direct law here folks, regular assumptions may not hold. Hit the emergency stop button and take a breath.
But of course management and incentive structures play a role in that too.
In this context the entirety of IBM cloud is basically a single storage point.
(If IBM was also running the local storage then we're talking about a very different risk profile from "run your own storage, back up to a cloud" and the anecdote is worth noting but not directly relevant.)
If that’s the case, then they should make it clear they don’t provide data backup.
A quick search reveals IBM does still sell backup solutions, including ones that backup from multiple cloud locations and can restore to multiple distinct cloud locations while maintaining high availability.
So, if the claims are true, then IBM screwed up badly.
DO Spaces, for at least a year after launch, had no durability guarantees whatsoever. Perhaps they do now, but I wouldn’t compare DO in any meaningful way to S3, which has crazy high durability guarantees as well as competent engineering effort expended on designing and validating that durability.
They should have kept encrypted data somewhere else. If they know how to use encryption, it doesn't matter where. Some people even use steganographic backups on YouTube.
There's certifications too, which you don't get unless you conform to for example EU data protection laws. On paper anyway. But these have opened up Amazon and Azure to e.g. Dutch government agencies, the tax office will be migrating to Office365 for example.
Why not? If the region is in country, encrypted, and with proven security attestations validated by third parties, a backup to a cloud storage would be incredibly wise. Otherwise we might end up reading an article about a fire burning down a single data center.
Microsoft has already testified that the American government maintains access to their data centres, in all regions. It likely applies to all American cloud companies.
America is not a stable ally, and has a history of spying on friends.
So unless the whole of your backup is encrypted offline, and you trust the NSA to never break the encryption you chose, it's a national security risk.
> France spies on the US just as the US spies on France, the former head of France’s counter-espionage and counter-terrorism agency said Friday, commenting on reports that the US National Security Agency (NSA) recorded millions of French telephone calls.
> Bernard Squarcini, head of the Direction Centrale du Renseignement Intérieur (DCRI) intelligence service until last year, told French daily Le Figaro he was “astonished” when Prime Minister Jean-Marc Ayrault said he was "deeply shocked" by the claims.
> “I am amazed by such disconcerting naiveté,” he said in the interview. “You’d almost think our politicians don’t bother to read the reports they get from the intelligence services.”
> “The French intelligence services know full well that all countries, whether or not they are allies in the fight against terrorism, spy on each other all the time,” he said.
> “The Americans spy on French commercial and industrial interests, and we do the same to them because it’s in the national interest to protect our companies.”
> “There was nothing of any real surprise in this report,” he added. “No one is fooled.”
> I always thought it was a little unusual that the state of France owns over 25% of the defense and cyber security company Thales.
Unusual from an American perspective, maybe. The French state has stakes in many companies, particularly in critical markets that affect national sovereignty and security, such as defence or energy. There is a government agency to manage this: https://en.wikipedia.org/wiki/Agence_des_participations_de_l... .
> America is not a stable ally, and has a history of spying on friends
America is a shitty ally for many reasons. But spying on allies isn’t one of them. Allies spy on allies to verify they’re still allies. This has been done throughout history and is basic competency in statecraft.
That doesn’t capture the full truth. Since Snowden, we have hard evidence the NSA has been snooping on foreign governments and citizens alike with the purpose of harvesting data and gathering intelligence, not just to verify their loyalty.
No nation should trust the USA, especially not with their state secrets, if they can help it. Not that other countries are inherently more trustworthy, but the US is a known bad actor.
> Since Snowden, we have hard evidence the NSA has been snooping on foreign governments and citizens alike
We also know this is also true for Russia, China and India. Being spied on is part of the cost of relying on external security guarantees.
> Not that other countries are inherently more trustworthy, but the US is a known bad actor
All regional and global powers are known bad actors. That said, Seoul is already in bed with Washington. Sending encrypted back-ups to an American company probably doesn't increase its threat cross section materially.
> All regional and global powers are known bad actors.
That they are. Americans tend to view themselves as "the good guys", however, which is a mistaken view and thus needs pointing out in particular.
> That said, Seoul is already in bed with Washington. Sending encrypted back-ups to an American company probably doesn't increase its threat cross section materially.
If they have any secrets they attempt to keep even from Washington, they are contained in these backups. If that is the case, storing them (even encrypted) with an American company absolutely compromises security, even if there is no known threat vector at this time. The moment you give up control of your data, it will forever be subject to new threats discovered afterward. And that may just be something like observing the data volume after an event occurs that might give something away.
> The raid led to a diplomatic dispute between the United States and South Korea, with over 300 Koreans detained, and increased concerns about foreign companies investing in the United States.
There is no such thing as good or trustworthy actors when it comes to state affairs. Each and every one attempt to spy on the others. Perhaps US have more resources to do so than some others.
You really have no evidence to back up your assertion, because you’d have to be an insider.
> There is no such thing as good or trustworthy actors when it comes to state affairs. Each and every one attempt to spy on the others. Perhaps US have more resources to do so than some others.
Perhaps is doing a lot of work here. They do, and they are. That is what the Snowden leaks proved.
> You really have no evidence to back up your assertion, because you’d have to be an insider.
I don't, because the possibility alone warrants the additional caution.
DES is an example of where people were sure that NSA persuaded IBM to weaken it but, to quote Bruce Schneier, "It took the academic community two decades to figure out that the NSA 'tweaks' actually improved the security of DES". <https://www.cnet.com/news/privacy/saluting-the-data-encrypti...>
ed25519 (and curve25519) are generally understood not to be backdoored by the NSA, or weak in any known sense.
The lack of a backdoor can be proven by choosing parameters according to straightforward reasons that do not allow the possibility for the chooser to insert a backdoor. The curve25519 parameters have good reasons why they are chosen. By contrast, Dual_EC_DRBG contains two random-looking numbers, which the NSA pinky-swears were completely random, but actually they generated them using a private key that only the NSA knows. Since the NSA got to choose any numbers to fit there, they could do that. When something is, like, "the greatest prime number less than 2^255" you can't just insert the public key of your private key into that slot because the chance the NSA can generate a private key whose public key just happens to match the greatest prime number less than 2^255 is zero. These are called "nothing up my sleeve numbers".
This doesn't prove the algorithm isn't just plain old weak, but nobody's been able to break it, either. Or find any reason why it would be breakable. Elliptic curves being unbreakable rests on the discrete logarithm of a random-looking permutation being impossible to efficiently solve, in a similar way to how RSA being unbreakable relies on nobody being able to efficiently factorize very big numbers. The best known algorithms for solving discrete logarithm require O(sqrt(n)) time, so you get half the bits of security as the length of the numbers involved; a 256-bit curve offers 128 bits of security, which is generally considered sufficient.
(Unlike RSA, you can't just arbitrarily increase the bit length but have to choose a completely new curve for each bit length, unfortunately. ed25519 will always be 255 bits, and if a different length is needed, it'll be similar but called something else. On the other hand, that makes it very easy to standardize.)
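The "nothing up my sleeve" property above can even be spot-checked yourself: the Curve25519 field prime is p = 2^255 − 19, and a Miller-Rabin probabilistic primality test (the implementation below is my own sketch, not from any particular library) confirms it is prime while a nearby candidate like 2^255 − 1 is not (it is divisible by 7, since 255 is a multiple of 3):

```python
import random

def is_probable_prime(n, rounds=40):
    """Miller-Rabin probabilistic primality test."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    # write n - 1 as d * 2^s with d odd
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a is a witness that n is composite
    return True

p = 2**255 - 19  # the Curve25519 field prime
print(is_probable_prime(p))           # True
print(is_probable_prime(2**255 - 1))  # False
```

Because the parameter is pinned to a simple, publicly checkable rule, there is no slot left over in which to hide a secretly generated value.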
Absence of evidence is not evidence of absence. It could well be that someone has been able to break it but that they or that organization did not publish.
How could you not!? Think of the bragging rights. Or, perhaps, the havoc. That a person could sit on this secret for long periods of time seems... difficult to maintain. If you know it's broken and you've discovered it, surely someone else could too. And they've also kept the secret?
I agree on the evidence/absence of conjecture. However, the impact of the secret feels impossible to keep.
Time will, of course, tell; it wouldn't be the first occasion where that has embarrassed me.
Some people are able to shut the hell up. If you're not one of them, you're not getting told. Some people can keep a secret. Some people can't. Others get shot. Warframe is a hilarious example where people can't shut the hell up about things they know they should keep quiet about.
Large amounts of data, like backups, are encrypted using a symmetric algorithm. Which makes the strength of Ed25519 somewhat unimportant in this context.
There are no stable allies. No country spies on its friends because countries don't have friends, they have allies. And everybody spies on their allies.
Like, don't store it in the cloud of an enemy country of course.
But if it's encrypted and you're keeping a live backup in a second country with a second company, ideally with a different geopolitical alignment, I don't see the problem.
you are seeing the local-storage decision through the lens of security; that is not the real reason for this type of decision.
While it may have been sold that way, the reality is more likely that the local DC companies just lobbied for it to be kept local and cut as many corners as they needed. Both the fire and the architecture show they cut deeply.
Now why would a local company voluntarily cut down its share of the pie by suggesting backups be stored in a foreign country? They are going to suggest keeping it in-country, or worse, as was done here, literally the same facility, and save/make even more!
The civil service would also prefer everything local, either for nationalistic/economic reasons or, if corrupt, for the kickbacks each step of the way: first for the contract, next for the building permits, utilities and so on.
There are a lot of gray relations out there, but there's almost no way you could morph the current US/SK relationship into one of hostility, beyond a negligible minority of citizens in either country being super vocal about some perceived slights.
You think when ICE arrested over 300 South Korean citizens who were setting up a Georgia Hyundai plant and subjected them to alleged human rights abuses, it was only a perceived slight?
'The raid "will do lasting damage to America's credibility," John Delury, a senior fellow at the Asia Society think tank, told Bloomberg. "How can a government that treats Koreans this way be relied upon as an 'ironclad' ally in a crisis?"'
Trump will find a way, just as he did with Canada for example (i mean, Canada of all places). Things are way more in flux than they used to be. There’s no stability anymore.
From the perspective of securing your data, what's the practical difference between a second country and an enemy country? None. Even if it's encrypted data, all encryption can be broken, and so we must assume it will be broken. Sensitive data shouldn't touch outside systems, period, no matter what encryption.
A statement like "all encryption can be broken" is about as useful as "all systems can be hacked" in which case, not putting data in the cloud isn't really a useful argument.
Any even remotely proper symmetric encryption scheme "can be broken" but only if you have a theoretical adversary with nearly infinite power and time, which is in practice absolutely utterly impossible.
I'm sure cryptographers would love to know what makes it possible for you to assume that, say, AES-256 can be broken in practice for you to include it in your risk assessment.
You’re assuming we don’t get better at building faster computers and decryption techniques. If an adversary gets hold of your encrypted data now, they can just shelf it until cracking becomes eventually possible in a few decades. And as we’re talking about literal state secrets here, they may very well still be valuable by then.
Barring any theoretical breakthroughs, AES can't be broken any time soon even if you turned every atom in the universe into a computer and had them all cracking all the time. There was a paper that does the math.
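A back-of-the-envelope version of that math (the guessing rate below is my own wildly generous assumption, not a figure from the paper): even an absurdly fast brute-force attack cannot exhaust a 128-bit keyspace within many lifetimes of the universe.

```python
SECONDS_PER_YEAR = 3.15e7
AGE_OF_UNIVERSE_YEARS = 1.38e10
keys_per_second = 1e18  # a billion billion guesses/sec, far beyond any real machine

def years_to_exhaust(bits):
    """Years needed to try every key in a `bits`-bit keyspace at the assumed rate."""
    return (2 ** bits) / keys_per_second / SECONDS_PER_YEAR

print(f"128-bit keyspace: {years_to_exhaust(128):.1e} years")
print(f"256-bit keyspace: {years_to_exhaust(256):.1e} years")
```

Even at that rate, the 128-bit case alone takes on the order of a thousand ages of the universe; 256 bits is astronomically beyond that.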
You make an incorrect assumption about my assumptions. Faster computers or decryption techniques will never fundamentally "break" symmetric encryption. There's no discrete logarithm or factorization problem to speed up. Someone might find ways to make for example AES key recovery somewhat faster, but the margin of safety in those cases is still incredibly vast. In the end there's such an unfathomably vast key space to search through.
You're also assuming nobody finds a fundamental flaw in AES that allows data to be decrypted without knowing the key and much faster than brute force. It's pretty likely there isn't one, but a tiny probability multiplied by a massive impact can still land on the side of "don't do it".
I'm not. It's just that the math behind AES is very fundamental and incredibly solid compared to a lot of other (asymmetric) cryptographic schemes in use today. Calling the chances of it tiny instead of nearly nonexistent sabotages almost all risk assessments, especially if it then overshadows other parts of that assessment (like data loss). Even if someone found "new math" and it takes, very optimistically, 60 years, of what value is that data then? It's not a useful risk assessment if you assess it over infinite time.
But you could also go with something like OTP and then it's actually fundamentally unbreakable. If the data truly is that important, surely double the storage cost would also be worth it.
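A minimal sketch of the OTP idea (assuming the pad comes from a good OS-level RNG, is used exactly once, and is stored separately from the ciphertext):

```python
import os

def otp_encrypt(plaintext: bytes):
    """One-time pad: XOR the data with a random pad of equal length.
    Information-theoretically unbreakable if the pad is truly random,
    never reused, and kept as secret as the data itself."""
    pad = os.urandom(len(plaintext))
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, pad))
    return pad, ciphertext

def otp_decrypt(pad: bytes, ciphertext: bytes) -> bytes:
    """XOR is its own inverse, so decryption is the same operation."""
    return bytes(c ^ k for c, k in zip(ciphertext, pad))

pad, ct = otp_encrypt(b"state secrets")
assert otp_decrypt(pad, ct) == b"state secrets"
```

In the backup scenario, the pad stays at home and only the ciphertext is shipped abroad; the "double the storage cost" is exactly the pad being as large as the data itself.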
Overnight, Canada went from being an ally of the US to being threatened by annexation (and target #1 of an economic war).
If the US wants its state-puppet corporations to be used for integral infrastructure by foreign governments, it's going to need to provide some better legal assurances than 'trust me bro'.
(Some laws on the books, and a congress and a SCOTUS that has demonstrated a willingness to enforce those laws against a rogue executive would be a good start.)
Then they didn't have a correct backup to begin with; for high profile organizations like that, they need to practice outages and data recovery as routine.
...in an ideal world anyway, in practice I've never seen a disaster recovery training. I've had fire drills plenty of times though.
We’ve had Byzantine crypto key solutions since at least 2007 when I was evaluating one for code signing for commercial airplanes. You could put an access key on k:n smart cards, so that you could extract it from one piece of hardware to put on another, or you could put the actual key on the cards so burning down the data center only lost you the key if you locked half the card holders in before setting it on fire.
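The k:n scheme the parent describes is classically implemented with Shamir's secret sharing: encode the secret as the constant term of a random degree-(k−1) polynomial over a prime field, hand out points as shares, and recover by Lagrange interpolation. A minimal sketch (field size and function names are my own choices; the secret must be smaller than the field prime):

```python
import secrets

PRIME = 2**127 - 1  # a Mersenne prime; field must exceed the secret

def _eval_poly(coeffs, x):
    """Evaluate coeffs[0] + coeffs[1]*x + ... mod PRIME (Horner's method)."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % PRIME
    return acc

def split(secret, k, n):
    """Split secret into n shares; any k of them reconstruct it."""
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(k - 1)]
    return [(x, _eval_poly(coeffs, x)) for x in range(1, n + 1)]

def combine(shares):
    """Lagrange interpolation at x = 0 over the prime field."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret
```

With `split(key, 3, 5)`, burning down the data center costs you nothing as long as any 3 of the 5 card holders survive, and any 2 colluding holders learn nothing about the key.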
Rendering security concepts in hardware always adds a new set of concerns. Which Shamir spent a considerable part of his later career testing and documenting. If you look at side channel attacks you will find his name in the author lists quite frequently.