It's the best start you can get toward a system that you fully understand and know isn't subverted. Then, once you know HW design, you can take the next step by putting it on your own ASIC. As a matter of fact, Wirth's early systems ran well on old tech, so you can use a pretty old process node. :)
How do you know that the thing which takes your design and "builds" it (i.e. the machine which places physical components) or "compiles" it (i.e. the thing which moves bits into an FPGA) isn't subverted?
I had the same thought. I think the hardest part could be "diffing" the outputs, though. Is some particular difference within expected parameters, or can you really detect that hidden bit of metal on layer 5 of the board? Interestingly I stumbled upon this: http://www.sandia.gov/mstc/fabrication/index.html
That's not going to happen because the two toolchains won't be anything alike, and you won't be able to make them alike either. Worse, the fact that people so often quote Thompson's attack shows that INFOSEC teaches subversion very poorly: it's the least likely attack to affect you, and you really need to counter the others. What you have to do is make the software correct, make it secure, and ensure there's no subversion in the lifecycle. That's called high assurance (or high robustness) system design. The links below show you how to do that.
By using the open source toolchains, of course. See http://opencircuitdesign.com/qflow/ - it's capable of producing OSU 0.35um output, ready for a fab. And at such a node you can inspect the result almost with the naked eye.
I've been wanting to know what they've actually fabbed rather than merely synthesized. What link do you have on them doing an ASIC-proven design at 0.35um? Not just in theory, but actually doing it with a chip that worked.
As far as getting OSS flows down to smaller nodes, I've been collecting methods for doing that. Physical [re-]synthesis is the next step for 130nm and 90nm, along with a few other tricks the commercial flows use. I've collected papers on as many of these methods as possible so academics can reimplement them for Yosys, Odin, ABC, etc in the future. It's critical that we get open flows for 90nm. We can do 180nm first as it's still highly useful, esp for custom work. However, 90nm is better for standard cell and flash memory.
My mindset is that we can combine proprietary tools and open tools in a hybrid approach. Let the proprietary tools catch all the errors, guide re-synthesis, etc, so long as they can be made to produce a series of traces for synthesis and test vectors that open-source tools can check. One or more EDA vendors might go for this as it keeps their secrets secret and profits high while letting us do the much easier work that OSS tools are already close to handling. I mean, we still strive for open EDA in the long term but use this in the interim. What do you think?
> Early versions of the qflow digital flow were used to create digital circuits used in high-performance commercial integrated circuits. It's real, and it works!
I think you're right, OSS can coexist with the proprietary cell libraries and P&R tools. I am (optimistically) expecting that in the near future we'll witness a revolution in VLSI similar to what happened to PCB fabrication - with the fabs becoming accessible to small companies and even hobbyists. And this should kick-start an improvement in both the OSS tools ecosystem and the current closed, secretive world of Cadence, Synopsys and the like.
You really should publish, as security through obscurity only hurts us! Out in the open, there is a chance that some measures will be taken. Concealment is a guarantee that bad guys will be doing bad things quietly.
That's a mainstream mantra but not true at all against High Strength Attackers (HSA's). They always have more resources than you do. The more they know, the more they can focus. That's why obfuscation on top of proven security techniques is the best strategy for use against HSA's. Gotta give them nothing (or false things) to aim at, then have good detection or recovery measures. Proven strategy.
Alternatively, I can publish my model. They can immediately move to counter the very specific things I do. Before you know it, they have a turn-key solution for compromising my devices individually or in bulk. Anyone using it is suddenly less safe. Knowing specifics will also benefit their stealth: less detection capability.
Rather not publish that at least until I've developed something better. ;)
> That's a mainstream mantra but not true at all against High Strength Attackers (HSA's). They always have more resources than you do.
That's why obscurity doesn't work. An attacker with many resources can reverse engineer whatever concealment you've concocted, but the concealment is enough to deter other researchers - the ones who would have published their results - from analyzing your system. So then the HSA knows of the attack and can take advantage of it, but you don't, so you can't counter it.
Nonsense. The attacker must expend significant effort to identify both the product- and user-specific obfuscations. That by itself does two things: (a) reduces the number that can afford to attack you; (b) possibly forces individual attacks instead of one-size-fits-all ones. On top of this, there's a range of monitoring and tamper-resistance measures that can be employed.
So, the result of such obfuscations plus strong, proven methods stops most attackers outright, helps catch some, and at least slows/limits others.
Compare that to a situation where every detail is known and the TCB (esp hardware & kernel) is the same for all users: find a flaw easily, compromise anyone. That's your model. See the NSA TAO catalog and any reporting of mass hacks to see where that led. Your method simply doesn't work and is defeated even by average attackers. Whereas mine has held off many in practice, with the few failures due to over-privileged, malicious insiders. Learn from the evidence. Apply strong methods, obfuscation, diversity, and detection in combination. Or High Strength Attackers own you without breaking a sweat.
Your argument is upside down. Unraveling obfuscation costs time and money, which means it may be able to deter amateurs (though the history of DRM implies it can't even do that). State-level attackers have nearly unlimited resources. You can't defeat them by costing them money because their money comes out of your paycheck. The only way to defend against them, if you have the hubris to think you're even capable of it, is to find and remove the vulnerabilities so there is nothing for them to exploit. Which makes obfuscation counterproductive because it makes things harder for whitehats who would otherwise help you.
And diversity and obscurity are separate things so the diversity argument is a non-sequitur.
Ok, let's put your method to the test with a simple use case. There are two protocols for secure messaging:
1. The popular one that's been battle-tested, is written in C, and uses one algorithm in a specific way.
2. My method, which starts with that protocol, applies a technique for protecting C code, uses 2-3 of the AES candidates, has a different initial counter value, and adds a port-knocking scheme. Each of these is unknown and randomized on a per user-group basis.
You have an encryption or protocol flaw. Your goal is to intercept these users' communications. You do not have an attack on their endpoints. Which of (1) or (2) is harder? Your argument already fails at this point. Let's continue, though.
You first have to figure out the port-knocking scheme. That means you must identify which parts of the packet headers or data are used to do it, plus how it's done, plus attack how it's done. This is already a ridiculously hard problem, evidenced by the fact that nobody ever bypassed it when I used it: they always went straight for endpoints or social engineering attempts. Using a deniable strategy like SILENTKNOCK can make it provably impossible for them to even know it's in use.
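To make the mechanism concrete, here's a rough C sketch of the server-side idea, using a hypothetical, made-up port sequence; something like SILENTKNOCK hides the knock in normal-looking packet fields instead of bare ports:

    /* Toy knock tracker: the secret sequence is an illustrative,
     * per-user-group parameter, not anything from a real deployment. */
    #include <stdio.h>
    #include <stddef.h>

    #define KNOCK_LEN 4

    static const unsigned short secret_knock[KNOCK_LEN] = { 7201, 4410, 9033, 1187 };

    struct knock_state {
        size_t progress;                /* knocks matched so far */
    };

    /* Feed each observed probe port into the tracker; returns 1 once the
     * full secret sequence has been seen in order. */
    static int observe_port(struct knock_state *st, unsigned short port)
    {
        if (port == secret_knock[st->progress]) {
            if (++st->progress == KNOCK_LEN) {
                st->progress = 0;
                return 1;               /* open the real service for this source */
            }
        } else {
            st->progress = (port == secret_knock[0]) ? 1 : 0;   /* restart */
        }
        return 0;
    }

    int main(void)
    {
        struct knock_state st = { 0 };
        unsigned short probes[] = { 80, 7201, 4410, 9033, 1187 };   /* example traffic */
        for (size_t i = 0; i < sizeof probes / sizeof probes[0]; i++)
            if (observe_port(&st, probes[i]))
                printf("knock completed at probe %zu\n", i);
        return 0;
    }

An attacker who doesn't know the sequence, or even that one exists, has nothing visible to aim at.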
Next are the ciphers. They don't know which one is in use. The non-security-critical values fed into it are randomized in such a way that they might have to try an astronomical number of possibilities just to start the attack. A weakness in any one algorithm won't help them. Unlike most SW people, I also address covert channels: no leaks to help them. What are the odds your attack will work on my users?
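A minimal sketch of the diversification step, assuming a hypothetical shared group secret and a placeholder mixing function (a real setup would use a proper KDF such as HKDF; the cipher names stand in for AES candidates):

    #include <stdio.h>
    #include <stdint.h>

    enum cipher { CIPHER_RIJNDAEL, CIPHER_SERPENT, CIPHER_TWOFISH, CIPHER_COUNT };

    /* Placeholder mixing function (FNV-1a); NOT a real KDF. */
    static uint64_t mix(const char *secret, const char *label)
    {
        uint64_t h = 0xcbf29ce484222325ULL;
        for (const char *p = secret; *p; p++) h = (h ^ (uint8_t)*p) * 0x100000001b3ULL;
        for (const char *p = label;  *p; p++) h = (h ^ (uint8_t)*p) * 0x100000001b3ULL;
        return h;
    }

    int main(void)
    {
        const char *group_secret = "example-group-secret";   /* hypothetical */

        /* Per-group choices: which vetted cipher to use, and a non-default
         * initial counter value for CTR mode. */
        enum cipher c = (enum cipher)(mix(group_secret, "cipher") % CIPHER_COUNT);
        uint64_t initial_counter = mix(group_secret, "ctr");

        printf("cipher id %d, initial counter %016llx\n",
               (int)c, (unsigned long long)initial_counter);
        return 0;
    }

Every group ends up with different parameters, so one captured configuration doesn't translate into a mass attack.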
Next is the protocol engine. This is where they have a solid chance of screwing me up, because the mainstream loves dangerous languages, OS's, and ISA's. However, there are dozens of ways to obfuscate/immunize C code against the most likely attacks, and safe system languages I can reimplement it in while checking the assembler. Further, if I make it static & an FSM, I can use tools like Astree for C or SPARK Ada to prove it free of errors. Without the binary and with limited online attacks, these guys would have to be geniuses to find an attack on it. After they beat the port-knocking scheme that was designed similarly...
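Here's the shape of what I mean by a static FSM protocol engine, with illustrative states and events rather than a real wire protocol; the point is a fixed transition table with no heap and no recursion, which is exactly the style that sound static analyzers handle well:

    #include <stdio.h>

    enum state { ST_IDLE, ST_KNOCKED, ST_AUTHED, ST_ERROR };
    enum event { EV_KNOCK_OK, EV_AUTH_OK, EV_DATA, EV_BAD };

    /* Fixed transition table: every (state, event) pair is accounted for. */
    static const enum state transition[4][4] = {
        /* IDLE    */ { ST_KNOCKED, ST_ERROR,  ST_ERROR,  ST_ERROR },
        /* KNOCKED */ { ST_ERROR,   ST_AUTHED, ST_ERROR,  ST_ERROR },
        /* AUTHED  */ { ST_ERROR,   ST_ERROR,  ST_AUTHED, ST_ERROR },
        /* ERROR   */ { ST_ERROR,   ST_ERROR,  ST_ERROR,  ST_ERROR },
    };

    int main(void)
    {
        enum state s = ST_IDLE;
        enum event script[] = { EV_KNOCK_OK, EV_AUTH_OK, EV_DATA, EV_BAD };
        for (unsigned i = 0; i < sizeof script / sizeof script[0]; i++) {
            s = transition[s][script[i]];
            printf("event %u -> state %d\n", i, (int)s);
        }
        return 0;
    }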
The use of provably strong mechanisms in layers is in NSA's own evaluation criteria as the kind of thing that stops nation-states. And it stopped NSA's own red teams in the past during A1/EAL6/EAL7 evaluations. They depend on it for their most critical stuff. Applying sound principles plus things that are easy for the defender but exponentially harder for the attacker makes a fence so high even nation-states have trouble jumping over it. They're forced to do one of several things: use their very best attacks, which might be lost upon detection; get physical, increasing the odds of detection; or ignore you to attack someone else for reasons of cost-effectiveness and keeping their best stuff stealthy. They usually do the latter, from what I see, even NSA per the Snowden leaks.
Of course, I'm sure you and others on your side will continue handing them your source, protocols, exact configurations, and so on to enable them to smash it. I'm sure you'll use the reference implementation without any changes so one attack affects you along with hundreds of thousands to millions.
However, anyone that wants their enemies to have to work for their system should take my approach, as it provably increases their difficulty so much that almost all of them will leave you alone or try other attack vectors. This strategy should be applied from ISA to OS to client & server software. And if you have holistic security, you'll prevent or eventually detect most of their other attacks, too. Worst case, one or two of the elite groups get in while you're still safe from the others (esp the most damaging). Still better than the "give it all to all of them" approach to security you're endorsing.
The entirety of your argument is that it's a good idea to use code or protocol secrecy as a type of secret password. All the rest is a list of non-sequiturs; you can use port knocking or Ada and still publish the code -- the SILENTKNOCK code is public and port knocking is just a mechanism for password authentication. But this kind of security is already provided by shared secret authentication methods that are stronger and better designed than ad hoc protocol secrecy.
The most common (and quite valid) critique of secret custom protocols is that each change you've made from the well-tested common version is an opportunity for you to have made an exploitable mistake. In the event the attacker does learn the custom protocol, that is clearly a liability. But claiming the protocol itself as a secret is worse than that. If one client using a secret password is compromised then you have to change the password, but if one client using a secret protocol is compromised then you have to change the protocol.
Even if no client is ever compromised, the general form of your argument is X + Y > X. So TLS + secret alterations is more secure than TLS. But it's a false dichotomy. A secret protocol is an unnecessary liability compared to other, better security mechanisms. TLS + independent Secure Remote Password is more secure than TLS + secret alterations. And the list of sensible things you can layer together is long: if you're sufficiently paranoid you can use TLS over ssh over IPSec, etc. The point at which you run out of good published security layers is so far past the point of practicality that there is no reason to ever suffer the liabilities of protocol secrecy.
"The entirety of your argument is that it's a good idea to use code or protocol secrecy as a type of secret password."
You're getting close to understanding but still not there. Obfuscation is the use of secrecy in code, protocol, configuration, etc to increase the knowledge or access required for an attack. What you wrote is partly there, but some secrets (eg a memory-safety scheme) require more than disclosure to break. Also, what I've always called Security via Diversity claims that everyone relying on a few protocols or implementations means each attack on them can automatically hit huge numbers of users. Therefore, each should be using something different in a way that causes further work or reduces the odds of sharing a vulnerability. The difference can be open or obfuscated.
"All the rest is a list of non-sequiturs"
That was then followed with claims showing that this makes mass attacks nearly impossible, with targeted attacks extremely difficult, and with a custom attack required per target. Even getting started on an attack practically requires that they've already succeeded in others. You dismiss this as a non-sequitur, which makes no sense. I've substantiated in very specific detail how my method provides great protection vs totally-open, vanilla, standardized-for-all methods. To avoid mere security by obscurity, my methods still utilize and compose the best of openly-vetted methods following their authors' guidelines to avoid loss of security properties.
I also noted this has worked in practice per our monitoring, with our systems merely crashing or raising exceptions due to significant vulnerabilities that simply didn't work. Success via obfuscation + proven methods was reported by many others, including the famed Grimes [1], who opposes "security by obscurity" in most writings. Despite arguing against me for a decade plus, computer science has started coming around to the idea, with many techniques published under DARPA funding, etc under the banner of "moving target" defense that try to make each system different, some with mathematical arguments about the security they provide. Most are obfuscations at the compiler, protocol, or processor level. (Sound familiar?) That field was largely the result of people doing what your side said, which resulted in attackers with hundreds to thousands of 0-days defeating such software with ease using fire-and-forget, kit-based attacks.
So, the evidence is on my side. In practice, obfuscating otherwise-proven tech in good configurations greatly benefits security, even against pro's. I gave clear arguments & examples that it makes attackers' jobs more difficult, requires they have inside access, and forces customized attacks instead of mass fire-and-forget kits. On the other hand, your counter shows little understanding of what the field has accomplished on the defensive side or the economics of malware development on their side. Plus, it equates all obfuscation with custom work by the least competent on the hardest parts of software. You've only supported keeping amateurs out of obfuscation decisions unless they're following cookbooks for easy things that are hard to screw up (eg Grimes-style obfuscations). Additionally, you follow up with the ludicrous idea that people unable to use some installers/scripts (all my approach requires) can safely compose and configure arbitrary, complex protocols that such people regularly get wrong in single-protocol deployments. What lol...
So, I guess this thread has to be over until you refute the specific claims that have held up in both theory and practice when done by pro's. Further, you might want to test your theory by switching to Windows, OpenSSL, Firefox, etc because these have had the most review and "pen-tests" by malware authors. Plus, publish all your config and I.P. information online in security and black hat forums for "many eyes" checks. Should make you extra safe if your theory is correct. Mine would imply using a Linux/BSD desktop w/ strong sandboxing, non-Intel where possible, LibreSSL, and automatic transformation of client programs for memory/pointer safety. Been fine, even in pentests by Linux/BSD specialists. Windows, OpenSSL, Firefox, etc? Not so much...
Good luck with your approach of open, widely attacked software in a standard configuration that you publish online. You're going to need it. :)
> some secrets (eg memory-safety scheme) require more than disclosure to break
And to that extent it isn't actually a secret. If the secrecy is providing you with anything then it comes at the expense of having a unique design but is lost by a compromise of any device. That isn't cost effective.
Your Microsoft link is an excellent example of the flawed thinking that leads to the erroneous conclusion that obfuscation is productive:
> Renaming the Administrator account can only improve security. It certainly can't hurt it.
The trouble is that it can. Renaming the Administrator account not only breaks poorly written malware, it also breaks poorly written legitimate software. Then the system administrator has to spend time and resources fixing a manufactured problem that could have been spent on other measures that achieve a better security improvement.
And you keep talking about things like diversity and sandboxing as if you can't use these things without hiding their design, but you can. Obfuscation of design is essentially useless because it has similar costs but a worse failure mode than other ways to improve security -- including the ones you keep talking about. Or layering independent systems.
You claim this layering is "ludicrous" but can you name a single major company that doesn't separately use all of those things already? Layering, for example, IPSec and TLS is the same work as configuring them separately. Being independent from each other so that a vulnerability or misconfiguration of one doesn't defeat the other is the idea.
Every security measure comes at a cost. You may need to configure more things etc. Which is why wasting resources on high-cost low-benefit measures like protocol secrecy harms actual security.
It is foolish to assume that HSAs don't already have access to whatever you, some guy on the Internet, know. So you can be an asshat, or do the right thing and try to 'protect the muggles' by letting information be available to all. They might feel threatened enough to pull valuables from unsafe places (such as networked computers).
It's not foolish if one understands high assurance security well enough that his recommendations preempted or predicted many specific things in the TAO catalog. And most of the other stuff they did. It's easy once you understand the nature of INFOSEC plus their nature.
No, they don't have my methodology: I never put it into digital form. Using paper is rule No. 1 in high security. The only people I talked to were ideological ones whom I vetted. It's mostly in my head and on paper hidden well in areas with tamper-evidence. They're watching my comms, but no black bag jobs. So the method is safe for now.
And what are you going to do with it? All of us high security engineers who aren't in defense, sell-outs, etc are typically ignored by mainstream INFOSEC. People don't do shit even when we have evidence. People do less than that on open HW, even with cheap FPGA's. You really think one of us is gonna burn our own security just to publish an obfuscation people won't use or give us anything for? Like hell...
If you want write-ups and design details on high security, I've got plenty of that. I keep master copies on Schneier's blog. Will send them. Not giving away something this big that you can't even afford to use. Not till I have a replacement...
There is also this thing of legacy code... What if, 3, 5, 10 years from now, you have moved on to your next super project, but someone has based system XYZ on your code? If they need to write a driver or similar, I'm sure they would be a lot happier to have documentation that clearly states the design etc than a document saying: "This was hidden so the big bad guys can't look!"
High assurance requirements going back to the Orange Book's A1 class said the source code and evaluation evidence come with the software. Further, customers were encouraged to re-run the tests & compilation, generate the system on-site, set it up according to the secure configuration guide, and report any problems back to the vendor. They become part of the evaluation process, although the real work was to be done by security professionals at the evaluation lab and NSA's pentesters.
Here's an example of one from one of the fathers of INFOSEC that handed NSA its ass over 2 years of pentesting:
The original was based on custom hardware and firmware because Schell etc had good foresight. :) It was put on Intel because the DOD and commercial sector wanted commodity products. So, there are probably issues at those layers that could breach security. Nonetheless, the kernel was what was designed with high assurance and it will illustrate nicely what comes out of that. Note that this was the 1980's when it was designed. Still more secure, easier to review, and easier to extend than most modern software...
So, I'd have to supply customers the source to meet your requirement. I could also show them how to protect it. However, thanks to the design approach, a high assurance proprietary system is easier to work with than a low assurance FOSS system. It's because the process absolutely forces the design to be as simple, modular, layered, and correct as possible. I wrote up some links on how that's done here in the context of subversion and hardware:
Security through obscurity is specifically about secrecy in the design or implementation of a system, not the specific parameters used with the system. It's not a catch-all phrase referring to any possible secrecy.
I thought it meant that the only security came from the secrecy of the design or implementation. Pro's keeping a good design/implementation secret, esp hardware, provably increases difficulty for attackers. So it's a legit method we call obfuscation.
I'll go further and say it's a pre-requisite against nation-states, as knowing the design/implementation is the first step to compromise. And they usually do.
I think the two ideas are conflated a lot, because it's so rare to have a secret system that is actually secure. In practice, a secret design or implementation is probably insecure, even if theoretically it doesn't have to be.
If the design or implementation is fixed in stone, then keeping it secret can only help, certainly. Most aren't, though, especially while they're still being developed. At that stage, if you can get smart people interested in helping, then openness will help far more than it hurts because you're far more likely to discover problems before the bad guys do.
It's usually true that the secret designs are insecure. I'll give you that. I only trust that which is done by pro's with review, esp with high assurance methods. Most things aren't like that. Easy to see how the interpretations would get conflated a lot.
As far as help or review, there's a false dilemma that goes around, between something being so open I'd put it in a Hacker News comment and totally closed. In reality, there are many options in between, with the review being more important than the openness. I tried to address it in this write-up:
The goal was to get people to consider proprietary, open-source models more to ensure a steady flow of cash to maintain/audit security-critical software. Dual-licensing at the least. Might prevent another OpenSSL debacle.
As far as active development, that's a good point. The trick there is to make sure you have guidelines for how to use the security-critical functionality and stick to them. The common stuff that's prone to error is done in a way that defaults to security. An example is how Ada or Rust approach memory/concurrency safety vs C. Not a guarantee by itself, but it raises the bar considerably. The parts that are obfuscated have little risk on the security-critical aspects and are more about changing the likelihood that shellcode will do its job. There are even methods in academia to do this automatically at the compiler level.
So, this is what I'm saying. You should definitely have pro's in-house or (if possible) externally to review the design for flaws. Just do Correct by Construction, make design as boring as possible, and obfuscate the hell out of any aspect you can without introducing risk.
An example from my work when I did high security consulting was to put guards in front of any networked service. The guard blocked attacks like DMA, TCP/IP, whatever. The messages it passed along were simple, easy to parse, and landed directly in the application. The application itself had automatic checking of buffers, etc. My biggest trick, though, was to use the guard to fake a certain platform/ISA (Windows/Linux on x86) while the app actually ran on a different OS and ISA (eg POWER/SPARC/Alpha/MIPS). They'd hit it constantly with stuff too clever for me to even want to waste time understanding. It never executed because they couldn't see what they needed to see. Add to that all the good security hardening, patches, monitoring, and whatnot. Strong security practices + effective obfuscation = TLA's remote attacks stood no chance. :)
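A bare-bones sketch of the guard idea, with made-up field names and sizes: the guard terminates the messy network protocols and hands the application only a tiny, fixed-format, length-checked message.

    #include <stdio.h>
    #include <string.h>
    #include <stdint.h>

    #define MAX_PAYLOAD 256

    struct guarded_msg {
        uint8_t  opcode;                /* small, enumerated command set */
        uint16_t length;                /* payload length, checked below */
        uint8_t  payload[MAX_PAYLOAD];
    };

    /* The application only ever sees messages that passed these checks.
     * Returns 0 on success, -1 to drop the input. */
    static int guard_validate(const uint8_t *raw, size_t raw_len,
                              struct guarded_msg *out)
    {
        if (raw_len < 3 || raw_len > 3 + MAX_PAYLOAD)
            return -1;                  /* malformed framing */
        out->opcode = raw[0];
        out->length = (uint16_t)((raw[1] << 8) | raw[2]);
        if (out->opcode > 4)
            return -1;                  /* unknown command */
        if (out->length != raw_len - 3)
            return -1;                  /* length field must match frame */
        memcpy(out->payload, raw + 3, out->length);
        return 0;
    }

    int main(void)
    {
        uint8_t wire[] = { 1, 0, 5, 'h', 'e', 'l', 'l', 'o' };
        struct guarded_msg m;
        if (guard_validate(wire, sizeof wire, &m) == 0)
            printf("opcode %u, %u-byte payload\n", m.opcode, m.length);
        return 0;
    }

The shellcode angle is separate: even if something malformed slipped through, it would be aimed at the wrong OS and ISA.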
Bullshit. Historically verifiable bullshit. See: Every security system ever that relied on not telling your opponents the details. When will you amateurs fucking read the literature and understand you're wrong?
It's actually most of the "pro's" and amateurs alike that have gotten people compromised for over a decade by giving bad advice. The advice for everyone to standardize on a few things that get the most attention from attackers resulted in botnets of hundreds of thousands to millions of nodes that required just a few 0-days & pre-made exploit kits with high reliability. My methods blocked all of that in the field despite tons of recorded attempts and even pentesting by pro's. Everyone else doing it had the same results. QED. So, are you all continuing to offer the same bad advice because you're amateurs at real security or just involved in defrauding your customers for profitable consulting fees?
So, let me break it down again in concrete, proven terms supported by decades of INFOSEC research and field work. History shows any complex system has vulnerabilities. To attack, the attacker must know the system, the vulnerability, how to exploit it, and the specific configuration. With defence in depth, the attacker must know all of that for the whole route in. So, you either have to make those components have no vulnerabilities (good luck) or you have to implement measures to prevent their exploitation. That requires changing one or more of the pre-conditions of a successful attack. So, measures that eliminate vulnerabilities by design (eg Correct-by-Construction), prevent their exploitation with obfuscation/transformations, or deny the enemy knowledge of their existence all provably increase security. I combine all of these in anything I do, with the strongest, most-analysed versions of each that are available. Positive field results followed while others got smashed repeatedly.
Note: This is especially true if the operational requirement in question, like protecting the whole HW lifecycle, is in its infancy in terms of INFOSEC techniques and has little to go on. Then obfuscation, applying strong stuff where possible, R.E. of samples (eg ChipWorks), and layers of detection/audit are the best things you can do. It's what we're doing now. Your side would suggest mask/fab/packaging companies should publish all source code and security methods online for attackers to study. Given what happened to desktops and servers, I'm glad they're listening to me instead of you. ;)
To clarify: this is to counter the statement that security through obscurity is a good thing. Yeah, that worked out well for the Enigma and countless other systems.
That commenter was replying to me with more nonsense like your selective example of Enigma, which barely fits into the discussion at all. Worse, your example actually supports my side's position: they cracked the best crypto the enemy had the second they knew how it worked, and no methods were in place to reliably detect this. Today, they crack the complex systems and protocols people are using shortly after figuring out how they work, as quality is so bad and everyone uses the same ones. The modern example of Enigma would be people using Linux desktops instead of Windows, Foxit instead of Adobe, unusual-but-good web servers instead of Apache, and so on, hoping that attackers not knowing will keep them safe. And it usually works, too, unless it's a targeted attack by pro's. That's saying something. ;)
My method combines vetted mechanisms in ways that adhere to their guidelines for secure usage, is directed by tools designed by security pro's, is largely invisible to users, requires a hack on the system to find, and forces custom, difficult attacks. One can mathematically prove that my strategies possess the traits I claim, along with immunity from some issues and vastly improved probabilistic security against most others. So, all the evidence is on our side, in theory and in the field results, where compromise is rare for us even in the face of pro's whose bonuses require it.
Feel free to refute this by showing me how everyone using two browsers, OpenSSL, or a desktop OS (Windows) with no changes on the same platform kept them safe from major attacks. Or led to such a high failure rate for attackers that hacks were actually worthy of news rather than scaremongering. My people were safe with my methods: some systems crashed or raised exceptions while many had no problems. I'm guessing you standardize-and-open types had the same experiences? No? :P
I got a few for you that show his way of doing things. The Lilith system was a start. Its approach of simplicity, layering, and consistency will help if you're lazy. Only a few here because I'm lazy after the huge posts I've been writing. ;)
Sure thing. I always laugh when I hear "full stack developer" in its main use. I'd think the title should've been reserved for people like Wirth who develop apps, OS's, firmware, compilers, assemblers, and the hardware they run on. That's literally every part of the computing stack.
Oh yeah, look up the Juice project. You'll need to use the Wayback Machine. It replaced JavaScript in browsers with Oberon so you got safety plus native speed. Code was sent as compressed abstract syntax trees, so it used little bandwidth without losing typing info. Faded like the rest but would've been awesome compared to the JS that bogs down even a Core Duo.
Thought you'd like it. Came to mind again in a recent conversation on Ethereum with their "new" idea of distributed, smart contracts with programming languages. Reminded me of agent-oriented programming we did in the 1990's with products like Telescript running whole stores and stuff:
Impractical really, but a fun concept. You created a bot that programmatically represented your goals, limits, heuristics for analysis, and so on. It could, with 1-2 commands, pack its code & data into a file that moved to another "place." It could move to Amazon's store over a slow WAN, analyse certain products for you, commit to the right one, notify you, record the receipt, pack that up, and come back with a report for you. It used interpreters with safe programming languages and sandboxes, plus A.I. for some of them. Imagine how easy, efficient, and secure Juice applets would be for that, today's smart contracts, or Facebook. Now you can sigh again as you realize how much faster and safer things could be if a handful of companies merely learned from the past and made it an option.
Btw, back then, we foresaw problems coming from untrusted or runaway code on machines in these agent marketplaces. So, platforms like Telescript had interpreters that isolated execution on a per-agent (per-user) basis. The platforms sat in then-small datacenters on servers that spun up interpreters on demand in response to API requests. They measured CPU, memory, storage, and network usage for billing purposes. Any of that sound familiar? Like some "new" invention we hear about on HN, etc all the time? ;)
The Verilog code for an acceptably efficient DRAM controller would be larger and more complicated than the Verilog code for the entire current system put together. It's not a good fit for a project that seeks simplicity.
The Spartan 6 FPGA can have DRAM controllers built in as a hard block, so it won't use up any FPGA gates.
The issue is that you then need to interface this to the CPU. Modern DRAMs like to transfer bursts of several words, and you need somewhere to put them. The obvious target is one line in a cache, but as the linked email states, a cache controller for the Oberon RISC would be more complicated than the designers wanted for the project.
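To make the burst-into-cache-line point concrete, here's a toy C model (sizes and behavior are illustrative, not from the actual Oberon RISC or the Spartan-6 memory controller): a miss pulls a whole burst from the DRAM model into one direct-mapped cache line.

    #include <stdio.h>
    #include <stdint.h>

    #define BURST_WORDS 8               /* words per DRAM burst      */
    #define NUM_LINES   64              /* direct-mapped cache lines */

    struct cache_line {
        int      valid;
        uint32_t tag;
        uint32_t data[BURST_WORDS];
    };

    static struct cache_line cache[NUM_LINES];

    /* Stand-in for the DRAM controller: returns one full burst. */
    static void dram_burst_read(uint32_t line_addr, uint32_t out[BURST_WORDS])
    {
        for (int i = 0; i < BURST_WORDS; i++)
            out[i] = line_addr + i;     /* fake data for the model */
    }

    static uint32_t cpu_read_word(uint32_t word_addr)
    {
        uint32_t offset    = word_addr % BURST_WORDS;
        uint32_t line_addr = word_addr - offset;
        uint32_t index     = (line_addr / BURST_WORDS) % NUM_LINES;
        uint32_t tag       = line_addr / BURST_WORDS / NUM_LINES;

        if (!cache[index].valid || cache[index].tag != tag) {
            dram_burst_read(line_addr, cache[index].data);  /* miss: fetch burst */
            cache[index].valid = 1;
            cache[index].tag   = tag;
        }
        return cache[index].data[offset];                   /* hit path */
    }

    int main(void)
    {
        printf("word 0x123: %u\n", (unsigned)cpu_read_word(0x123));  /* miss: fills line */
        printf("word 0x124: %u\n", (unsigned)cpu_read_word(0x124));  /* hit: same line   */
        return 0;
    }

Even this toy version hints at why the designers balked: the real thing needs tags, valid bits, a write policy, and arbitration, all in Verilog.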
Why do you care about efficiency? Use all your block RAMs for a cache to mitigate the DRAM inefficiency, and use the simplest possible controller. They can be quite simple, really, see this one for example: https://github.com/fpga-logi/logi-hard/blob/master/hdl/inter...
That's still more than twice as long as the current largest module, and now you need to add caches to the system. (To be clear, I'm not involved in Oberon's design, I'm just explaining the reasoning as I understand it.)
Understandable. OTOH, it brings such a design much closer to the realistic hardware, making it a more valuable teaching tool. Understanding the cache is important.
What do you do with one of these things? I get the concept of totally understanding the hardware & software, but once you boot up, then what? Are you presented with a command prompt? Can you use it for IoT?
Or (on the software side) is its purpose to understand how compilers work?
>Oberon’s «desktop» is an infinitely large two-dimensional space on which windows (and documents, since the distinction becomes meaningless in some ways) can be arranged [...] When people held presentations using Oberon, they would arrange all slides next to each other, zoom in on the first one, and then simply slide the view one screen size to the right to go to the next slide.
It is commonly said that Plan 9 from Bell Labs is more UNIX than UNIX. This idea of sliding the view along an infinite plane, in turn, seems to me even more UNIX than anything I've seen or heard of. Not having used -- and in fact only having heard of in passing -- Oberon before, I can't comment on how Unixy the remainder of the system is, but nonetheless, this right here seems to capture the essence of UNIX. In all the times I have been impressed by well composed presentations, never have I questioned the need for "tricks" like screenshots and videos to get a coherent presentation of interactive tasks or information from software external to whatever tool was being used to run the presentation.
I'm not so sure: the simplicity of the concept is nice, but is it really simple/nice to use?
I imagine that you could get lost quite easily in an infinite plane, so you'd need to be able to zoom out to find your windows, but if you have too many windows open then you have to zoom out so much that it becomes hard to see which window is what...
Workspaces seem a better way to organize your windows (even though it can also be difficult to find the workspace which contains the window you want).
When I think about zooming, I think of how Google Maps works and also about responsive web design. That is to say, having differing amounts of information show at different zoom levels. It was my impression that this is how the Oberon desktop works, but I may be mistaken in that assumption.
At 1:1, imagine a regular view.
Below some threshold effective resolution, imagine condensed information. Instead of miniature, unreadable windows, one might display a simplified view, say a Metro-style color coded box with -- relative to box size -- a big icon and document title.
Below yet another threshold, condense more, using conventional icons instead of Metro.
At lowest zoom level, display a one-dimensional list of everything. Searchable of course.
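A small sketch of the threshold idea above, with made-up zoom cutoffs just to show the shape of the logic:

    #include <stdio.h>

    enum view_mode { VIEW_FULL, VIEW_TILE, VIEW_ICON, VIEW_LIST };

    /* Pick a representation from the effective scale (1.0 == regular 1:1 view). */
    static enum view_mode pick_view(double zoom)
    {
        if (zoom >= 0.75) return VIEW_FULL;   /* readable window content    */
        if (zoom >= 0.25) return VIEW_TILE;   /* color-coded box + title    */
        if (zoom >= 0.05) return VIEW_ICON;   /* conventional icon          */
        return VIEW_LIST;                     /* searchable one-column list */
    }

    int main(void)
    {
        const double levels[] = { 1.0, 0.5, 0.1, 0.01 };
        for (int i = 0; i < 4; i++)
            printf("zoom %.2f -> mode %d\n", levels[i], (int)pick_view(levels[i]));
        return 0;
    }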
Something like a tiling window manager + a nice multitouch trackpad seems like it'd be a good fit for the description/videos of Oberon's environment that I've seen.
Despite booting and testing Oberon on more than one occasion, I have personally never found a use for it. That said, it makes me happy to see people tinkering and creating things like this.
One of the goals of Project Oberon is a system that can be understood in its entirety by a single individual. A USB controller, or an HDMI encoder would take several books to describe.
There are hundreds of FPGA boards that do not meet the requirements of Project Oberon, in addition to the board you mention.
Might be fun. My spare-time projects fully support Plan 9, so why not muck around with another zombie OS? EDID parsing should be easy, but the USB stack will be painful. And supporting "real" video hardware would be a pain. I'd have to figure out if I can run it on some hardware I own first, though.
Realistically, though, this is likely to fall by the wayside, since I only have so many hours a day for coding, and enough of them are spent at work.
Oberon is a pretty interesting system to muck with.
Another approach is to blackbox USB and HDMI, and consider them outside the project, much like the VGA monitor or whatever is in the keyboard to send PS/2 signals. You can connect to spare IOs on the FPGA board with 'blackbox' hardware, as long as protocols are simple and open.
Exactly. I often recommend that anyway. It lets you offload all the interrupts and often plenty of pre-processing. And microcontrollers with onboard peripherals are more numerous, powerful, and cheap than ever before. So many possibilities there.
I downloaded as much of the source of Bluebottle as I could. I'm glad the OP gave me the link to the modern one. I'll get it too. Think I might try to retarget it to one of the safer languages, secure hardware (eg the Cambridge CHERI processor), or certified compilers. Oberon is the kind of thing a one-man project can make progress on. On the other side, I think I recently saw another team quit after little progress on rewriting NetBSD in something safer. ;)
I don't have the link anymore. Doesn't matter. Despite NetBSD's portability and good design, it was simply too complicated and tied to C style for them to pull off the re-write. Oberon, on the other hand, has the whole OS written in a type-safe language with good modularity and is a lot simpler. Should be a much easier re-write.
Very nice. Afraid I won't be able to get one now, but for those that want to play now and keep the real thing on a wish list, apparently there's (at least one) emulator:
I imagine Prof. Wirth must have thought that 32032 wasn't going anywhere soon 20 years ago.
As I see it, the ARM instruction sets (note the use of the plural) are a moving target (look at Cortex chips with Thumb2 only). They are far from RISC if you count them, and the instruction encoding is horrendous.
It's like saying "why not target Windows? I doubt that's going anywhere soon"
With FPGAs, your only risk is that the manufacturer discontinues the chip you targeted, and you can then target another brand with the same (or nearly the same) HDL code.
100% spot-on assessment! Wirth thought ahead well enough last time in making it modular and simple enough to easily port. They thought even better by putting it on an FPGA. And also by making the CPU and components simple enough to implement on an ASIC with an old node (350nm or 180nm). That's inexpensive enough to be crowd-funded.
The I/O stuff, etc would need to be implemented for that node, too, though...
20 years ago was 1995. The NS32k was long dead then; National retired it in 1990 when it was only six years old. Even when it came out, it was kind of an underdog compared to the already-five-year-old 68k, which was already powering the Sun workstations and went on to power any number of Sun competitors plus the Lisa, the Macintosh, and the PalmPilot.
ARM has been around at this point for 30 years, and yes, there are incompatibilities from one to the next. But it'll probably be around in some form for another 30 years.
And, yes, your Windows binaries will probably still be runnable ten years from now, and your Linux binaries probably won't be.
I imagine Wirth chose 32032 for its purported elegance, not its longevity or commercial practicality. As I recall they had all sorts of initial problems and then lost the war for that market segment against the 68k.
I have heard many people talk about the NS32k in positive overall terms, and I've researched it a few times. It looks like a nice CISC architecture, but not different/better enough than the 68k at the time.
I do recall the team under Jack Tramiel at Atari looked into the NS32k for their new 16/32-bit system, and even built a prototype, and then tossed it in favour of 68k because of purported bugs in the NS32k.
The 32032 instruction set was wonderful to generate code for. There were a few early bugs (if a floating point instruction crossed a page boundary, and a page fault was triggered, hilarity would ensue), but those were ironed out well before the chip was abandoned.
There also exist Oberon systems targeting ARM I think.
I assume the motivation for the FPGA is the original Oberon idea: "its primary goal was to design and implement an entire system from scratch, and to structure it in such a way that it can be described, explained, and understood as a whole." (The original 1985 Oberon system was also running on custom hardware.)
I'll add that ARM is a sue-happy organization whose processor licenses cost $1-15 million. MIPS gets so much traction in the embedded space by charging a mere $700-900 thousand dollars. If we consider the cost to implement, then the tiniest and most legally-clear ISA possible makes the most sense.
However, my usual recommendation is SPARC given its ecosystem, openness, and mere $99 registration fee. RISC-V, esp the Rocket processor, will hopefully take its place. A custom, tiny RISC chip on an FPGA? Ok, ok, Wirth... that is more practical... (sighs) Lol...
I know what he means. As the other commenter pointed out, the question is whether ARM will be as popular 10+ years down the road. The chip Oberon was on was very popular back then. Not any more. So, they had to re-write significant portions of it to support new hardware.
Wirth decided it would be better to make a simple, custom processor that follows the spirit of the system. That by itself would be a fun project. He could put it on any current and future FPGA, something very unlikely to disappear. He then ported Oberon to it. Now, it's future-proof.
Maybe I worded it unclearly. The original version of Oberon ran on a popular embedded chip. Those don't exist any more, even though people at the time thought they'd be around forever. The same risk exists with ARM, just less of it.
Oberon-related tech is also training for ETH students. They get plenty of experience trying to extend the OS, port the compilers, etc. They have little in hardware by comparison, though. FPGA's have gotten cheap and their nature means they'll stay around in some form forever. Verilog also goes back decades. So, there's the added benefit of teaching students to build a processor which could also run the system.
So, combining future-proofing, simplicity, OS education, and HW education requirements all into one meant a custom, Wirth-style processor on FPGA was best option. It also creates opportunities for things like TCP/IP, graphics, or security engines he probably didn't think about at the time. His students or FOSS volunteers might pick that up given how simple the HW is.
So, ARM is a bad idea. Those SOC's are too complex. Extending or re-implementing it needs a ridiculously-expensive license. They also sue people over even using the ISA, which makes me boycott them where possible. The FPGA doesn't have these problems, plus it allowed him and his students to attempt an ideal replacement that could last decades. So, that was his choice and one of the better options.
Note: I'd have used it as an excuse to get some grant money to do a full, low-cost, RISC-V implementation to put A2 Bluebottle on. More practical. That's just me, though. ;)
It would mean compromising the theme there of consistency-all-the-way-down.
What I would like to see instead would be RISC-V rather than their own ISA. At least then some lessons could be shared with other academics and tinkerers.
Tell that to the companies trying to push MIPS SOC's for Android smartphones. If it isn't ARM, they're told to get lost. Likewise for x86 + Windows. As ARM and Intel themselves say, being able to leverage the ecosystem of existing code, tools, talent, and so on is a huge differentiator. ARM on mobile (esp iOS & Android) has a massive ecosystem and many tool vendors, whereas RISC-V or Wirth's CPU's don't have jack. So, manufacturers explicitly require ARM for high-end smartphones and SOC makers default to it despite the ridiculous licensing cost.
Note: Areas like embedded in general with smaller, custom jobs don't worry about ISA's as much. Plenty of ARM, MIPS, PPC, x86, SPARC, Super-H, 16bits, 8bits... you name it. Much more fun space to be a programmer if you get to pick the cpu/board. :)
I'm in the GPU business. Here, ISA compatibility does not even exist and nobody cares. As long as you own an entire toolchain - a GLES/OpenCL driver, or, as in this case, an Oberon OS and a compiler - nobody gives a tiniest bit of crap about the underlying ISA.
So, I do not see any reason for the Oberon machine to rely on any "standard" ISA. It can even be compiled into some kind of a NISC, and nobody would notice any difference.
Oh I agree about Oberon and GPU's. My argument is for mainstream market for software in desktops and mobile. The ISA usually does matter.
Now, if we're talking GPU's, the effective ISA would be DirectX, OpenGL, OpenCL, etc. They're the standards that software and tooling target. So, does your product get by without supporting any of those? Or do you have to comply with your niche's standard interfaces and ecosystems, too?