foohbarbaz's comments (Hacker News)

From a developer standpoint, I could get minimal output via a serial port with a few lines of assembly on bare hardware.

USB requires a lot more work, and on Windows the API is atrocious. On Linux the API (libusb) is quite a bit nicer, but still a fair amount of work.

The plug-and-play part, device naming, and unique identifiers are a special "joy" of USB.

If a serial port is sufficiently fast, I'll take it any day over USB.


I was able to build a serial bridge between an SNES and a PC on a simple breadboard with stock parts: http://i.imgur.com/sPKGvl.jpg (the output is from a Teensy that is running as a USB<>serial device, so the PC sees it as /dev/ttyUSB0 ... the input and passthru connect DB9 to the controller port on the console, and the switch allows the original controller to work in place of the comm board.)

The Teensy driver code was around 10KB, and the PC code that opens /dev/ttyUSB0 was around 15KB.
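For a sense of scale, the PC side of such a bridge really is tiny. Here is a minimal sketch in Python (standard library only) of opening a serial device and putting it in raw mode with termios; a pseudo-terminal stands in for the /dev/ttyUSB0 device mentioned above so the snippet runs without hardware, and the 115200 baud figure is just a common default, not the bridge's actual rate:

```python
import os
import pty
import termios
import tty

# In real use this fd would come from os.open("/dev/ttyUSB0", os.O_RDWR).
# A pseudo-terminal pair stands in here so the sketch runs without hardware.
master, slave = pty.openpty()
fd = slave

# Raw mode: no echo, no line buffering -- bytes pass through unmodified,
# which is what you want when talking to a USB<>serial bridge.
tty.setraw(fd)

# Set the line speed (ispeed/ospeed are indices 4 and 5 of the attr list).
attrs = termios.tcgetattr(fd)
attrs[4] = attrs[5] = termios.B115200
termios.tcsetattr(fd, termios.TCSANOW, attrs)

# Simulate the bridge sending one byte, and read it back raw.
os.write(master, b"\x42")
data = os.read(fd, 1)
print(data)  # b'B'
```

The whole open/configure/read loop is a few dozen lines, which lines up with the ~15KB of PC code quoted above.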

I have been trying for over a year to implement a true USB version so that we can take advantage of the system's full bandwidth of 2.68MB/s, which only USB high speed can handle, and it's been nothing but a nightmare.

I will be really sad in the future when little toy projects like this are out of the hands of hobbyists due to costs and complexity. We've already lost that in the desktop operating system field, where video cards alone are more complex and undocumented than entire OS kernels these days.


You mean connecting to the EXT port at the bottom of the SNES? I didn't even know it had such a thing. Sounds interesting, what are you trying to do?


Well, we are currently connecting to the controller port because it's really easy to connect to those. But of course you can only hammer at those registers at around 40KB/s.

So yeah, a friend made a PCB and found some female edge connectors we can cut to size to connect to the EXT port, where we can DMA at 2.68MB/s through using eight data lines. On the other side of the PCB, we just stuck a custom sized 28-pin IDE header. Easy to do whatever with that: wire to breadboard one-at-a-time or with an IDE cable.

The hope would be to then have some device that monitors each clock rise, grabs the eight bits on the data bus, and sends them to the PC. It can buffer a bit and have some latency, that's not a big deal.

But even with ICs that can latch the data quickly enough, we don't have enough bandwidth over serial or USB1 to send 2.68MB/s of data. It'll have to be high-speed USB2, and will almost certainly need to be some kind of custom driver, as I doubt you can do some kind of super baud-rate of 16,000,000+bps.
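As a quick sanity check on that estimate (assuming plain 8N1 UART framing, which spends 10 bit-times per data byte; the 2.68MB/s figure is the one quoted above):

```python
# Target throughput from the SNES EXT-port DMA figure quoted above.
payload_bytes_per_s = 2_680_000  # 2.68 MB/s

# 8N1 framing: 1 start + 8 data + 1 stop = 10 bit-times per byte,
# so the required UART line rate is:
required_baud = payload_bytes_per_s * 10
print(required_baud)  # 26800000 -- well past the 16 Mbps mentioned

# USB 1.1 full speed is 12 Mbps on the wire, before protocol overhead.
usb1_bps = 12_000_000
print(required_baud > usb1_bps)  # True: USB1 can't carry it
```

So even before protocol overhead, the raw numbers rule out serial and full-speed USB, which is why only high-speed USB2 (480 Mbps) is left.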


> It'll have to be high-speed USB2, and will almost certainly need to be some kind of custom driver, as I doubt you can do some kind of super baud-rate of 16,000,000+bps

The venerable FT232H claims "data transfer speeds up to 40Mbytes/s":

http://www.ftdichip.com/Products/ICs/FT232H.htm

I'm not sure what is needed to actually accomplish such rates. It might be easier to do it with a USB-enabled MCU. Either way, I wouldn't discount the standard USB classes too early. I'm "pretty sure" that you should be able to push some tens of Mbps through the USB-CDC class.


What an ugly mess!

Somewhat related: I am amazed how much gossiping goes on in an entirely male work group behind other people's backs. Males are definitely worse than females in that regard. I would not be surprised if the allegations were true. A bunch of prima donna devs can be nasty to work with. Just stay away and separate work and life.


Eh no, in my personal experience, in a workplace where the majority are women you -will- get cliques and you will hear some nasty rumors go around. Women enjoy spreading gossip and harassing each other.

https://www.youtube.com/watch?v=3hmlPtRu1SQ


"The soldier's lot":

http://militera.lib.ru/research/suvorov12/07.html

is spot on! Been there.


BTW, speaking of unintended consequences: I just love how on some interstates they put up bright orange lights right near an exit and a tight curve afterwards.

So, you are doing 84mph (the speed limit being 75) and go from a relatively brightly lit area into darkness and a curve. A second of blindness inevitably follows.

Still, I would probably rather just have a higher divider between lanes going in different directions, so that high beams can just be used most of the time.


I drive a lot at night on I-70 between Denver and Green River. Visibility of the road is a big issue in the areas with tight curves. When I had an older car with weaker headlights, I periodically had to brake abruptly because I couldn't tell which way the road ahead curved.

Over the mountain passes in spring, where the lane markings get worn down, sometimes you just can't see anything without high beams (and risking blinding the oncoming traffic).


I am running a box built with Fedora Core 4 (2007 vintage). Never patch any systems. Why would I?

If I am running a service facing the internet, it's custom built and patches would do it no good. Why would I wait for a vendor to release a patch? If a service is external, I will watch out for vulnerabilities and rebuild ASAP before any patches are out. Besides, 90% of the time my custom build is not even vulnerable to a particular problem.

If I am NOT running a service, why would I care about patches for it?

Why would I wholesale patch a server anyway? If somebody breaks in and gets a local shell, all is lost anyway. If they are not in, they are dealing with externally facing services only, see above. There is a specific, counted number of daemons on every machine.

This whole patch-update thing is misguided and for people who want assurances and no responsibility.


As a security researcher, this approach just confounds me.

I've never had an update break my system, and if someone pushed updates that were broken, I wouldn't trust any old versions of their software any more than the current one.

And we keep finding that people don't update and miss critical vulnerabilities. There may be some admins out there that can independently track and patch every known vulnerability... but that seems like an impossible task for a box with any nontrivial amount of software on it.

And a lot of vulnerabilities aren't widely released. Updates sometimes coincidentally break zero days that were never publicly revealed.

I remember the world where everyone stubbornly refused to leave early versions of IE. Massive problem for security. The Chrome team looked at that and made the call to move to automatic updates. I'm still pretty convinced that's a better world.

You want to run a small box that barely faces the internet where you constantly write your own patches in parallel with the primary software developers, while also researching and patching new vulnerabilities before they are deployed, go for it... but when that becomes the industry norm, I consider it extremely harmful.

Maybe you can pull that off, but most people are not nearly that cool.


I think you misunderstand. It's not that people are pushing out crap updates. Rather, the problem is that when you update one thing on linux you usually end up having to update 100 other things.

I'm in a similar position to the OP, in that I don't generally update linux systems. The problem is that there is no way to simply 'update everything' in linux (at least, not in Centos). yum update certainly doesn't do it - in Centos 5.5 it only gets you php 5.1.x. To get a newer version you have to update it manually or bodge yum.

Then the problem is that many newer packages require a newer glibc or whatever, and that is something that can break your entire system very easily.

I think the root of the problem is that linux isn't very easy to update, unlike Windows.

As long as your linux system is well locked down and you regularly keep an eye on it, I don't see a problem with not updating regularly.


That makes a lot of sense, thanks. I had always sort of seen Linux as easier to update, since it's a single command, but you're right... that command doesn't necessarily get you all the way. Things are going to vary from distro to distro, and none of them will necessarily roll in the bleeding edge version of whatever thing you want the day it launches. And then, custom code is vital on a lot of machines for a lot of applications, and it will introduce its own dependencies.

That said, these factors really complicate security advice on patch management. If customers could be trusted to lock things down and keep an eye on them, that would be a much better world. And I'm sure a lot of admins out there are more than capable, but I worry about the Dunning-Kruger effect catching some admins off guard.

But ultimately, this is just a battle of emphasis more than disagreement. The answer isn't "everyone should always patch everything," it just depends on a lot of factors.


Everything you said is very general. The article in question talks about "Linux servers" (or that's what I read). Speaking of those, could you explain specifically what the point of a wholesale update is, as opposed to what is described? What I mean is running a specific set of services (which are most often built from source anyway) and keeping those services up to date.

What you get in return is both stability and security, since you don't wait for a vendor to release a patch and you actually understand what the vulnerability is.

With wholesale patching, in fact, you can never be sure whether your system is secure with respect to all published vulnerabilities.

Another interesting detail is that, with services built from source, you often end up with vulnerabilities that aren't applicable to your configuration.

Oftentimes, you can just tweak the config instead of changing code (and potentially breaking running things).

Software updates are just a cop-out for people who are too lazy to pay attention to security.


In my experience, the number of admins that say they can just stay on top of the vulnerabilities is greater than the number who actually can or do.

> Everything you said is very general.

Ok, here's specifically how this approach fails in the real world:

A guy is ignoring vulnerabilities that don't seem to apply to his configuration. So there's a kernel flaw that allows privilege escalation. He thought it was no big deal because he doesn't allow a guest login. Then there's a flaw that allows remote users to trigger memory corruption, allowing remote guest access. No risk there, he thought, because guests have no privileges on the box.

You see how the attacker got in?

You might counter that that admin was just too "lazy" to line up all the vulnerabilities and see how they interact. But there were almost 200 vulnerabilities last year just in the kernel. Are you going to conduct the 19,000 security audits required to see how they interact? What about groups of three? What about vulnerabilities in other packages? This workload doesn't just go up linearly.
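The pairwise-audit arithmetic above checks out (reading "almost 200" as roughly 195 kernel CVEs, an assumption on the exact count), and the third-order case shows why the workload explodes:

```python
import math

n = 195  # "almost 200" kernel vulnerabilities in one year

pairs = math.comb(n, 2)    # interactions between any two vulnerabilities
triples = math.comb(n, 3)  # ...and between any three

print(pairs)    # 18915 -- the ~19,000 audits mentioned above
print(triples)  # 1216865 -- over a million three-way combinations
```

That is the sense in which the workload "doesn't just go up linearly": each extra interaction order multiplies the audit count by roughly another factor of n.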

Also, this approach is weaker against unpublished vulnerabilities. If you're strapping together older software, especially deprecated stuff no one else is using, you're losing one perk of open source software, the "many eyes make bugs shallow" bit. People not using your configuration means no one is going to discover vulnerabilities in it except for attackers. You may think, "good, that makes it harder for attackers," but that's security by obscurity, and it doesn't actually hurt attackers very much. Vulnerabilities patched in newer versions of software give insight into vulnerabilities lingering in older versions, so exploits can often be crafted for previous versions far more easily than for newer ones.

This is why security professionals recommend defense in depth. You don't know which part of your platform is going to break and allow attackers to exploit vulnerabilities that you didn't think were relevant.

Also, for a lot of systems, eventually the admin will change. The guy or gal that follows you will be dependent on the system you set up, and may be less experienced or less capable. If they inherit a patch management system that basically entails, "Become a security expert in addition to your other duties," they are not going to strictly adhere.

You know your system and your situation, so I don't want to sound like your approach can never ever work. But if we're giving advice to the unwashed masses, I think we need more advice tailored for the people you dismiss as "too lazy to pay attention to security," because that describes basically everyone.

PS - When you describe other people's opinions as "lazy cop outs," it can kill discussions. You might watch out for that. HN isn't Reddit, people sometimes bury stuff that has good substance but poor tone.


Please tell me you are joking.

There is no way you could possibly keep up with every vulnerability of everything installed on your server.

> If somebody breaks in and gets a local shell, all is lost anyway.

That is not true at all. You should run your server such that someone could get a shell running as the apache user - and still be able to do very little. They could read files and the database (which is bad), but not modify any files (which would be worse).


Of course I can and will keep up with every vulnerability for every service that is running and facing the Internet.

I do not accept the risk of waiting for some vendor to release a patch. If there's a hole, read the report, determine whether your config/build is vulnerable, rebuild.

Why would you want to patch something you are not running or using?


WRT local shells, it might well be a good idea to assume someone who got a shell as apache could use a privilege escalation 0-day and do some more damage. Hopefully your deployment process is such that starting from scratch isn't a huge hardship. I'd appreciate it if someone with actual security experience (not me) weighed in...


It's very naive to think that you can protect a box from somebody with a local shell. Never worked.


I can't make head or tail of this comment. Are you saying you think it's a bad idea to keep my Linux kernel and Nginx up to date? What good does it do to "rebuild ASAP" unless you've at least downloaded source updates from the developers? Or are you telling me you write your own security fixes for all the software you use in public-facing services?


Of course it's a bad idea to make unnecessary system changes (install patches) that bring the system to an essentially unknown state that nobody ever tested (the particular order and set of patches installed over your specific OS configuration).

You only patch what you need to patch. Most of the time for every production service you end up building a custom version anyway. Patching does no good to those.

So, by patching you only bring potential harm and overhead of going through change control processes.


"Our" affair is over? Gee, too bad my household does not know it yet and a 2 year old laptop has been demoted to kids machine for playing Minecraft... _Our_ love affair with a PC is over. Likely will never buy again.


In a practical sense, .NET does not exist outside of the Microsoft world. Most of those who mention Mono have never used it (or non-MS environments, for that matter). Don't waste time on "Microsoft Java". Either learn Java, or better yet, start with Python.


Calling C# "Microsoft-Java" is about as accurate as calling Go "Google-C".


No, unless you want to shut yourself out of the non-Microsoft ecosystem. Considering that learning a language is a long-term investment and MS's star is waning, it's probably not a good idea.


If only C# programming skills (and programming skills in general) were transferable to other languages. Oh wait.


Having spent 15 years in the US (I am Russian), now I am somewhat shocked when I don't get a smile from a cashier in a store in, say, Billings, MT.

Navajos, who don't make eye contact or smile either, also take some getting used to now.

It's cultural, but Russia is a harsh place, so it goes deeper.

The part about people in the street not being helpful to each other is also true.

Just watch those dash cam videos where people witness a horrible crash, drive around it, and keep moving. You can die in the street and nobody might care. It's cultural and it's not a good thing.

The mob mentality is also a lot stronger. Westerners are a lot more tolerant of people being different or choosing their own way. It is in fact encouraged to be different and individual. Russians, on the other hand, have whole layers of culture dedicated to making sure everybody stays in the crowd and doesn't stick out as an individual.

