As an Airbnb host I can just as quickly tell you stories of exploitative guests who are chronic abusers of the system, attempting to get refunds by threatening narratives like this that they know have the potential to get sympathy and traction with Airbnb or on social media. In almost all of these cases it ends up being one person's word vs. another's. An accusation is far from proof, but hosts most often stand to lose.
Of course, everyone comes from their own particular point of view and/or bias.
I'm a host. The POV I see this from is that of someone who pays close attention to the market and the changing perception of short term rentals. I've read far enough beyond the headlines to know that these accusations are very often not what they seem, and that this narrative is being blown way out of proportion considering how infrequently it actually happens.
The POV a sub-segment of NYT readers see this from is one of being righteous about short-term rentals (in theory at least.)
The POV of writers and editors at NYT is to respond to their readers' preferences.
It definitely depends on the nature of your work, but the notion of having a channel I need to check hourly makes me ill. If I’m needed I should get a notification, and if I’m involved in an active discussion, I’m there. Otherwise I’ll catch up on a daily basis.
This is like saying that self-driving cars won't ever become a thing because someone behind the wheel needs to be to blame. The article cites AI systems that the FDA has already cleared to operate without a physician's validation.
> This is like saying that self-driving cars won't ever become a thing because someone behind the wheel needs to be to blame.
Which is literally the case so far. No manufacturer has shown any willingness to take on the liability of self driving at any scale to date. Waymo has what? 700 cars on the road with the finances and lawyers of Google backing it.
Let me know when the bean counters sign off on fleets in the millions of vehicles.
Yes and I would swear that 1700 of those 2000 must be in Westwood (near UCLA in Los Angeles). I was stopped for a couple minutes waiting for a friend to come out and I counted 7 Waymos driving past me in 60 seconds. Truth be told they seemed to be driving better than the meatbags around them.
You also have Mercedes taking responsibility for their traffic-jam-on-highways autopilot. But yeah. It's those two examples so far (not sure what exactly the state of Tesla is. But.. yeah, not going to spend the time to find out either)
I'm curious how many people would want a second opinion (from a human) if they're presented with a bad discovery from a radiological exam and are then told it was fully automated.
I have to admit if my life were on the line I might be that Karen.
Ah, you're right. Something else I'm curious about with these systems is how they'll affect difficulty level. If AI handles the majority of easy cases, and radiologists are already at capacity, will they crack if the only cases they evaluate are now moderately to extraordinarily difficult?
Let's look at mammography, since that is one of the easier imaging exams to evaluate. Studies have shown that AI can successfully identify more than 50% of cases as "normal" ones that do not require a human to view. If a group started using that, the number of cases radiologists interpret would drop by half, and the share of abnormal cases among the remainder would roughly double.
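The arithmetic is worth making explicit. A rough sketch, with the 50% rule-out rate from the studies mentioned and an assumed 10% abnormal rate (illustrative numbers only):

```python
# Effect of an AI ruling out half of mammograms as normal.
total_cases = 1000
abnormal = 100                        # assume 10% of all cases are abnormal
ai_cleared = total_cases // 2         # AI rules out half the volume as normal

human_read = total_cases - ai_cleared # 500 cases left for radiologists
print(human_read)                     # volume drops by half
print(abnormal / human_read)          # abnormal share doubles: 0.10 -> 0.2
```

The same reasoning drives the point about case mix: the cleared cases are by construction the easy normals, so everything left on the worklist is harder.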
Generalizing to CT of the abdomen and pelvis and other studies, assuming AI can identify a subpopulation of normal scans that do not have to be seen by a radiologist, the volume of work will decline. However, the percentage of complicated cases will go up. Easy, normal cases will not be supplementing radiologist income the way they have in the past.
Of course, all this depends upon who owns the AI identifying normal studies. Certainly, hospitals or even PACS companies would love to own that and generate the income from interpreting the normal studies. AI software has been slow to be adopted, largely because cases still have to be seen by a radiologist and the malpractice issue has not been resolved. Expect rapid changes in the field once malpractice solutions exist.
From my experience the best person to read these images is the medical imaging expert. The doctor who treats the underlying issue is qualified but it's not their core competence. They'll check of course but I don't think they generally have a strong basis to override the imaging expert.
If it's something serious enough a patient getting bad news will probably want a second opinion no matter who gave them the first one.
I'm willing to bet everyone here has a relative or friend who at some point got a false negative from a doctor, just like there are drivers who have caused accidents. The core problem is how to go about centralizing liability, or not.
But since we don't know where those false negatives are, we want radiologists.
I remember a funny question that my non-technical colleagues asked me during the presentation of some ML predictions. They asked me, “How wrong is this prediction?” And I replied that if I knew, I would have made the prediction correct. Errors are estimated on a test data set, either overall or broken down by groups.
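The distinction can be sketched in a few lines: a held-out test split gives you error estimates in aggregate or broken down by group, but never for a single prediction (toy data, purely illustrative):

```python
import numpy as np

# Held-out test labels vs. model predictions (toy example).
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])
group  = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

overall_error = np.mean(y_true != y_pred)   # 2 wrong out of 8 -> 0.25
for g in ["a", "b"]:
    mask = group == g
    print(g, np.mean(y_true[mask] != y_pred[mask]))  # per-group error rate
```

The question "how wrong is this one prediction?" has no answer from this machinery; only population-level error rates do.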
The technological advances have supported medical professionals so far, but not substituted them: they have allowed medical professionals to do more and better.
That's horrific. You pay insurance to have ChatGPT make the diagnosis. But you still need to pay out of pocket anyway. Because of that, I am 100% confident this will become reality. It is too good to pass up.
Early intervention is generally significantly cheaper, so insurers have an interest in doing sufficiently good diagnosis to avoid unnecessary late and costly interventions.
Think a problem here is the sycophantic nature. If I’m a hypochondriac, and I have some new onset symptoms, and I prompt some LLM about what I’m feeling and what I suspect, I worry it’ll likely positively reinforce a diagnosis I’m seeking.
I mean, we already have deductibles and out-of-pocket maximums. If anything, this kind of policy could align with that because it's prophylactic. We can ensure we maximize the amount we retrieve from you before care kicks in this way. Yeah, it tracks.
It sounds fairly reasonable to me to have to pay to get a second opinion for a negative finding on a screening. (That's off-axis from whether an AI should be able to provide the initial negative finding.)
If we don't allow this, I think we're more likely to find that the initial screening will be denied as not medically indicated than we are to find insurance companies covering two screenings when the first is negative. And I think we're better off with the increased routine screenings for a lot of conditions.
The FDA can clear whatever they want. A malpractice lawyer WILL sue and WILL win whenever an AI mistake slips through and no human was in the loop to fix the issue.
It's the same way that we can save time and money if we just don't wash our hands when cooking food. Sure it's true. But someone WILL get sick and we WILL get in trouble for it
What's the difference in the lawsuit scenario if a doctor messes up? If the AI is the same or better error rate than a human, then insurance for it should be cheaper. If there's no regulatory blocks, then I don't see how it doesn't ultimately just become a cost comparison.
> What's the difference in the lawsuit scenario if a doctor messes up?
Scale. Doctors and taxi drivers represent several points of limited liability, whereas an AI would be treating (and thus liable for) all patients. If a hospital treats one hundred patients with ten doctors, and one doctor is negligent, then his patients might sue him; some patients seeing other doctors might sue the hospital if they see his hiring as indicative of broader institutional neglect, but they’d have to prove this in a lawsuit. If this happened with a software-based classifier being used at every major hospital, you’re talking about a class action lawsuit including every possible person who was ever misdiagnosed by the software; it’s a much more obvious candidate for a class action because the software company has more money and it was the same thing happening every time, whereas a doctor’s neglect or incompetence is not necessarily indicative of broader neglect or incompetence at an institutional level.
> If there's no regulatory blocks, then I don't see how it doesn't ultimately just become a cost comparison.
To make a fair comparison you’d have to look at how many more people are getting successful interventions due to the AI decreasing the cost of diagnosis.
> What's the difference in the lawsuit scenario if a doctor messes up? If the AI is the same or better error rate than a human, then insurance for it should be cheaper
The doctor's malpractice insurance kicks in, but realistically you become uninsurable after that.
yeah but at some point the technology will be sufficient and it will be cheaper to pay the rare $2 million malpractice suit than a team of $500,000/yr radiologists
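Using the comment's numbers, the break-even is simple arithmetic (the team size here is assumed, just for illustration):

```python
# Hypothetical break-even: salaried radiologists vs. occasional malpractice suits.
radiologists = 8                 # assumed team size
salary = 500_000
team_cost_per_year = radiologists * salary   # $4.0M/yr

suit_cost = 2_000_000
break_even_suits = team_cost_per_year / suit_cost
print(break_even_suits)          # 2.0 suits/yr before AI becomes the worse deal
```

Under those assumptions the AI wins as long as it generates fewer than two $2M suits per year, which is why the argument ultimately reduces to error rates and damages, not technology.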
This is essentially what's happened with airliners.
Planes can land themselves with zero human intervention in all kinds of weather conditions and operating environments. In fact, there was a documentary where the plane landed so precisely that you could hear the tires hitting the runway centerline markings as it landed and then taxied.
Yet we STILL have pilots as a "last line of defense" in case something goes wrong.
No - planes cannot "land themselves with zero human intervention" (...). A CAT III autoland on commercial airliners requires a ton of manual setting of systems and certificated aircraft and runways in order to "land themselves" [0][1].
I'm not fully up to speed on the Autonomi / Garmin Autoland implementation found today on Cirrus and other aircraft -- but it's not for "everyday" use for landings.
Not only that but they are even less capable of taking off on their own (see the work done by Airbus' ATTOL project [0] on what some of the more recent successes are).
So I'm not sure what "planes can land on their own" gets us anyway even if autopilot on modern airliners can do an awful lot on their own (including following flight plans in ways that are more advanced than before).
The Garmin Autoland basically announces "my pilot is incapacitated and the plane is going to land itself at <insert a nearby runway>" without asking for landing clearance (which is very cool in and of itself but nowhere near what anyone would consider autonomous).
Taking off on their own is one thing. Being able to properly handle a high-speed abort is another, given that is one of the most dangerous emergency procedures in aviation.
Having flown military jets . . . I'm thankful I only ever had to high-speed abort in the simulator. It's sporty, even with a tailhook and long-field arresting gear. The nightmare scenario was a dual high-speed abort during a formation takeoff. First one to the arresting gear loses, and has to pass it up for the one behind.
There's no other regime of flight where you're asking the aircraft to go from "I want to do this" to "I want to do the exact opposite of that" in a matter of seconds, and the physics is not in your favor.
How's that not autonomous?
The landing is fully automated.
The clearance/talking isn't, but we know that's about the easiest part to automate it's just that the incentives aren't quite there.
It's not autonomous because it is rote automation.
It does not have logic to deal with unforeseen situations (with some exceptions of handling collision avoidance advisories). Automating ATC, clearance, etc, is also not currently realistic (let alone "the easiest part") because ATC doesn't know what an airliner's constraints may be in terms of fuel capacity, company procedures for the aircraft, etc, so it can't just remotely instruct it to say "fly this route / hold for this long / etc".
Heck, even the current autolands need the pilot to control the aircraft when the speed drops low enough that the rudder is no longer effective because the nose gear is usually not autopilot-controllable (which is a TIL for me). So that means the aircraft can't vacate the runway, let alone taxi to the gate.
I think airliners and modern autopilot and flight computers are amazing systems but they are just not "autonomous" by any stretch.
Edit: oh, sorry, maybe you were only asking about the Garmin Autoland not being autonomous, not airliner autoland. Most of this still applies, though.
There's still a human in the loop with Garmin Autoland -- someone has to press the button. If you're flying solo and become incapacitated, the plane isn't going to land itself.
One difference there would be that the cost of the pilots is tiny vs the rest that goes into a flight. But I would bet that the cost of the doctor is a bigger % of the process of getting an x-ray.
They have settled out of court in every single case. None has gone to trial. This suggests that the company is afraid not only of the amount of damages that could be awarded by a jury, but also legal precedent that holds them or other manufacturers liable for injuries caused by FSD failures.
At the end of the day, there's a decision that needs to be made, and decisions have consequences. And in our current society there is only one way we know of to make sure a decision is made with sufficient humanity: making a human responsible for it.
Medicine does not work like traffic. There is no reason for a human to care whether the other car is being driven by a machine.
Medicine is existential. The job of a doctor is not to look at data, give a diagnosis and leave. A crucial function of practicing doctors is communication and human interaction with their patients.
When your life is on the line (and frankly, even if it isn't), you do not want to talk to an LLM. At minimum you expect that another human can explain to you what is wrong with you and what options there are for you.
There's some sort of category error here. Not every doctor is that type of doctor. A radiologist could be a remote interpretation service staffed by humans or by AI, just as sending off blood for a blood test is done in a laboratory.
> There is no reason for a human to care whether the other car is being driven by a machine.
What? If I don't trust the machine or the software running it, absolutely I do, if I have to share the road with that car, as its mistakes are quite capable of killing me.
(Yes, I can die in other accidents too. But saying "there's no reason for me to care if the cars around me are filled with people sleeping while FSD tries to solve driving" is not accurate.)
You know, for most humans, empathy is a thing; all the more so when facing known or suspected health situations. Good on those who have transcended that need. I guess.
Non-blinding headlights already exist. Modern projection headlights can map where the light ends up on the road to illuminate your path while avoiding oncoming traffic. It just isn't widely adopted (in the US at least) as of yet.
It is here and sucks on curvy roads. My commute is down a mountain canyon and if I'm on the outside of a curve (turning left) the incoming traffic does not detect my headlights and I'm blinded for the entire curve. I want them banned. How hard is switching between high and low beams?
We're not talking about auto high-beams. We're talking about headlights that mask out a portion (of even the normal beam) based on where other cars are.
> The recognizing other cars part of those systems is… not great. (yet? hopefully.)
Or bicyclists or pedestrians. We have all of automotive history to demonstrate that blinding others isn't necessary for driving, not even for comfort-level safety gains.
I don't know how he'd be deciding which oncoming cars are equipped with this feature, as it's still uncommon. And he said "How hard is switching between high and low beams?", which seems to be more talking about auto high beams.
Better -something- that's trying to mask low beams than the alternative (nothing).
> I don't know how he'd be deciding which oncoming cars are equipped with this feature, as it's still uncommon.
The technology is required on some types of headlights (which you can recognise), because…
> Better -something- that's trying to mask low beams than the alternative (nothing).
…they also made low beams notably brighter and reach further (= extended the angular output). The alternative isn't nothing, it's less bright low beams.
Adaptive headlights have only been approved for use in the US for ~3 years. They were sold in cars in the US before that, but the adaptive function was disabled.
> On country roads, it’s extremely valuable for keeping the shoulder lit up with high beams to see things like deer and bicycles.
It is my experience that bicyclists and pedestrians aren't partial to the endless passing vehicles that are blinding them. Seeing is part of how they keep out of drivers' way. I disagree that we should ruin their vision just so drivers can see them even more than they used to.
> Modern projection headlights can map where the light ends up on the road to illuminate your path while avoiding oncoming traffic.
Ask any EU trucker about this and they will curse you out with the most creative expletives you have heard in your life. At least the existing systems are apparently hot garbage, especially on highways where some oncoming truck headlights might be hidden by the median yet you can still blind the trucker themselves (since they're higher up).
I don't think about any of this and never have. My 2022 Model Y has 60,000 miles on it and the battery has only lost a negligible amount of health/range since I bought it.
The couple of times I’ve even done as little as fly through Heathrow it has been apparent to me that the UK is on its way to becoming an unfettered surveillance state, and I never hear anyone talking about it.
You say "on its way" as if it hasn't been at the forefront of this for decades. Until China and post-9/11 US ramped up facial recognition and CCTV projects MASSIVELY, the UK didn't just have more CCTV units per capita than anywhere else on Earth, they had the most in absolute terms. Even now last I checked the UK has about 1 camera for every 11 people.
Yes, the one time I've tried mushrooms it was a very unpleasant experience. For weeks I was left feeling like I had done some permanent damage to my mental health. I eventually got past that feeling and there might be a point I try them again, but not without professional guidance. Psilocybin is powerful and not a remotely recreational thing (for me at least.)
The first time I tried them, it was like I peeked behind the “curtain” in the Wizard of Oz, and knew even in that moment I’d never be able to unsee or forget it. It was the equivalent of being a child and realizing Santa didn’t actually exist.
Life as I had known it, the things that then animated me, were “shown” to be a pantomime - a joke. It was tremendously sad, and - for better or worse - I’ve never been the same since.
Maybe it was a coming of age experience - something I would have more painfully experienced later anyway. But it cost something significant. It changed me. Still, some 25 years later, I don’t know if it was for the better.
Can anyone tell me why I have several devices in my home that demand a certain USB-C cord in order to charge? They are mostly cheap Chinese devices that won’t acknowledge a more expensive (e.g., Apple) USB-C cord plugged into them, even when plugged into the same transformer. They only charge with the cheap USB-C cord they came with. What gives?
Because the USB Consortium made a terrible mistake. Instead of speccing USB-PD power supplies to default to 5V <3A when there are no resistors in the port of the other device, the default is to do nothing. So in order to be in spec, you have to refuse to charge non-compliant ports. This means the compliant power supplies are worse, in a way. So you need to use a "dumb" USB-A power supply and a USB-A to C cable, which does default to 5V <3A no matter what. As for why some devices choose to use non-compliant ports, I assume it's extreme cheapness. They save a penny on a resistor.
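A simplified divider model shows why a compliant source stays dark on a resistor-less port. The resistor values are the spec's standard Rp/Rd terminations; the model itself is a sketch, not the real detection circuit:

```python
# Why a compliant USB-C source won't power a port with no CC pull-down.
def cc_voltage(rp_ohms, rd_ohms, vdd=5.0):
    """Voltage the source sees on CC, given its pull-up Rp and the sink's pull-down Rd."""
    if rd_ohms is None:            # non-compliant sink: CC left floating
        return vdd                 # pull-up keeps CC parked at the rail
    return vdd * rd_ohms / (rp_ohms + rd_ohms)

RP_DEFAULT = 56_000   # source pull-up advertising default USB power
RD_SINK = 5_100       # the 5.1k pull-down a compliant sink must present

print(cc_voltage(RP_DEFAULT, RD_SINK))   # ~0.42 V -> "sink attached", enable VBUS
print(cc_voltage(RP_DEFAULT, None))      # 5.0 V  -> looks like nothing attached, stay off
```

A dumb USB-A brick skips this detection entirely and just drives 5V, which is why the out-of-spec devices charge from it.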
At this point I'm even surprised that compliant cables and chargers exist so the GP can have that problem.
But I believe the specs are that way to avoid problems with OTG devices. If both devices decide to just push a voltage into the cable at the same time, you risk spending a great deal of energy or even start a fire. That said, there are better ways to deal with this issue; those are just slightly more expensive.
I think the Apple USB-C charger I have is compliant and so is the cable. I actually use it to charge my Samsung phone primarily, but inadvertently discovered that it won't run a Raspberry Pi 4 at all. The $12 adapter that is sold for that purpose runs the Raspberry Pi 4 just fine. Apparently because it just supplies 5 volts all the time, no matter what the device says.
The Raspberry Pi 4 has a design error in its USB-C circuitry.
It does include a pull-down resistor, but wired incorrectly (compliant devices need two), which results in compliant chargers only correctly detecting it when using a “dumb” (i.e. containing no e-marker chip) USB-C-to-C cable. Your Apple cable probably has a marker (all their Macbook charging cables have one, for example).
Thanks for the explanation. I actually found out the USB-C plug can act as a USB device. At USB 2.0 speeds oddly enough. So I have all my Pi 4s configured now in that mode and I just power them through the 5 volt header, which seems simpler. Albeit less convenient.
I had to get this USB "power blocker" that only passes the data pins through, otherwise the Pi runs off the computer it is plugged into all the time
That's because USB 3 is not natively provided by the SoC on the RPi 4, but rather by a dedicated IC, connected to the main SoC via PCIe :)
Hence there's two USB 3 ports and two USB 2 ports, but wired completely differently internally! Presumably the USB-C port is connected to the SoC more directly.
If it is all 5 volts, it will not do much. But perhaps that screwball PD stuff would get you in trouble. The OTG stuff just concerns who is the USB host: the OTG cable instructs a normally client device to act as a USB host. The holy grail was to find that magical OTG cable that would let me charge the phone while it was acting as host. Hmmm... on reflection, this would be any dock, right?
And a rant, for free: Holy smokes, I think OTG may be the second most braindead marketing dribble sort of acronym to come out of tech, right behind wifi(wireless... fidelity? what does that even mean?)
No need to have a separate USB-A brick: simply use a USB-C brick plus a C-to-A adapter. The adapter will force 5V down the port no matter what. But afaik you still need a USB-A cable (or another adapter?), which kinda defeats the whole idea of having just one cable.
I guess this is only partially true, as I have a A-to-C charger cable from Huawei that works with everything except my Pixel 4A phone. And my Pixel 4A phone works with everything except that specific cable.
USB A->C cables are supposed to have a Rp pullup on CC1, and leave CC2 disconnected. Huawei made some A->C cables which (incorrectly, and spec-violatingly) have Rp pullups on both CC lines, which is how you signal you're a power sourcing Debug Accessory
Your Pixel 4A is entering debug accessory mode (DebugAccessory.SNK state in the USB-C port state machine); other devices probably don't support debug accessory mode and just shrug.
Maybe the cable is missing the CC pin resistors (all USB-A to C cables are supposed to have them to identify themselves as such), and maybe only the phone cares.
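The two guesses above amount to different inputs to the same sink-side attach logic. A sketch, heavily simplified from the Type-C port state machine (the function and names are illustrative, not from any real stack):

```python
# Sink-side attach classification, simplified from the USB Type-C state machine.
# A compliant A-to-C cable presents Rp on exactly one CC line; the reported
# Huawei cable presents Rp on both, which the spec reserves for debug accessories.
def classify_attach(cc1_pulled_up, cc2_pulled_up):
    if cc1_pulled_up and cc2_pulled_up:
        return "debug accessory"   # DebugAccessory.SNK; most phones don't handle this
    if cc1_pulled_up or cc2_pulled_up:
        return "source attached"   # normal charging path
    return "nothing attached"      # missing CC resistors: no charge

print(classify_attach(True, False))  # compliant A-to-C cable
print(classify_attach(True, True))   # the buggy dual-Rp cable
```

That would explain why only the Pixel misbehaves: a device that actually implements debug accessory mode reacts to the dual pull-up, while devices that don't just shrug and charge.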
Not related to this exact problem, but also note that the cheapest cables max out at 60W; anything drawing more power needs a special cable with its own smarts and communication to prove it can handle the extra current (a total nightmare, I found; some computer manufacturers also have years of USB-C PD bugs still troubling computer bootups).
It's not a terrible mistake. A terrible mistake would have been having such power available on ports that even a reasonable person might short out by innocently connecting a USB C cable between them.
A couple 5.1k resistors add about $0.00001 to the BOM cost. The terrible mistake is on the designers of devices who try to forego these.
It's really not the BOM cost that drives these decisions but the assembly cost of adding placements. Time on the assembly line is very valuable and doesn't have a simple / clean representation on a spreadsheet. It's dependent on the market and right now assembly line time is worth a lot.
That is exactly the reality. I work in a place where we build HW. The resistor costs almost nothing. But installing it, having it in stock, making sure the quality of yet another component is correct, and eventually managing another vendor all cost money. So much so that we assign a resistor a cost of some cents (up to ten) even when the part itself costs so little that the software has problems tracking it.
Except that connecting 5V to 5V does not cause a short circuit. No current will flow without a voltage difference. If there is a difference, the capacitors in one of the power supplies will charge up to the same voltage and then current stops flowing again.
That would be true if both sides were exactly 5.0V, but they're not. There's a 5% tolerance, from 4.75V to 5.25V, and in practice you will see chargers often run "5V" at 5.1V intentionally, to account for resistive loss in the charging cable. If you accidentally connect your "5V" device to the host's "5V" you may find that the host simply disables the port, which has happened to me more than once. So no, you can't just blindly connect them together.
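Quick numbers for why "5V to 5V" is not a no-op, using the tolerance band and an assumed cable resistance:

```python
# Current between two "5 V" rails at opposite ends of the 5% tolerance band.
v_host = 5.25          # high end of tolerance (chargers often run hot on purpose)
v_device = 4.75        # low end of tolerance
r_cable = 0.2          # assumed round-trip cable resistance, ohms

i = (v_host - v_device) / r_cable
print(i)               # about 2.5 A flowing between two nominally identical rails
```

That is well past the point where a host port's current limiter or polyfuse steps in, which matches the "port simply disables itself" experience.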
What do you mean by "blow"? There's often a polyfuse which will trip, and needs to cool down to reset. I haven't seen a normal fuse but I believe it's possible. Efuses are also common, to allow the system to automatically reset as soon as the fault condition is removed.
It's unlikely that anything will be damaged, but the device likely will not work until the issue is resolved.
No it will not. I have done it thousands of times. No no no. You can downvote all you want, but you will still be wrong. Nothing will happen. Period. If you do not know about it, educate yourself before downvoting and commenting about fuses.
Because all these supplies work with transistors, they do not act as a load to each other. It is as if the two had a diode in the output (in fact they do have one, but not directly at the output).
This is my typical experience on HN lately; it is getting full of people with absolutely no idea what they are talking about, who are constantly downvoting good comments.
>Because of all these supplies work with transistors they do not act as a load to the other.
This is utter nonsense, Ohm's law doesn't magically stop working with a transistor. I do know about this, I've designed power supplies and USB devices, and I've destroyed more than a few components accidentally by connecting two switching supplies together. Yes, there will be current flowing, and yes, sometimes a fuse or breaker will trip, I have experienced this many times, and just because you haven't doesn't mean it doesn't happen.
>Is like if the 2 had a diode in the output (in fact they do have one, but not directly in the output)
Sounds like you're referring to either ESD protection diodes, or flyback diodes, neither of which do anything in the case of two similar but unmatched power supplies.
I'd advise you to get a degree in engineering (as I have), or do some serious studying, as this kind of uninformed discussion is not productive or helpful to anyone, it's just noise.
Wow. You really are an interesting person. You obviously have no idea what you are talking about, but keep insisting…
Man… you are really a nice case.
Let me make a last attempt, even when I know it will fail:
1) “Ohm's law doesn't magically stop working with a transistor”
Ohm’s law works only with linear components; it is a linear relation. So NO, it does not work for a transistor or a diode. No it doesn’t. Not because of magic, but because they are not frigging linear! Go study some physics.
2) No, I was certainly not referring to ESD diodes, but to the rectifier at the end of any SMPS. Some may have a last-stage linear regulator; in that case the diode is part of the junction of the output transistor. At any rate, ANY wall-mounted power supply, and 99% of all supplies in the world, will just shut down when the output is higher than the target voltage. GO TEST IT AND STOP with your nonsensical replies.
BTW: the 1% of supplies that do regulate down are called “four-quadrant supplies”; they are much more complicated and expensive, and it makes no sense to use one in a USB charger.
I don’t care which degree you have. If you really do, and it was expensive, ask for your money back. Unless it is a degree in prompt engineering…
I know what I'm talking about, I am a professional working in electrical engineering. Please don't insinuate that I don't, it's unnecessarily insulting.
If you have two power supplies at different voltages and connect them together, there will be a finite resistance through the cable and Ohm's Law applies. Current will flow. With a low resistance and big enough voltage difference, there will be a significant current, and it can trip the supply. This is not difficult to achieve.
The rectifier you're referring to is the flyback diode in that case. But now as you've said yourself, the power supply will shut down if the voltage coming in is too high, which is frequently due to either a polyfuse or an efuse tripping. So it sounds like you're just arguing to argue, while actually agreeing with my point. You said "nothing will happen", but if one shuts off, something has happened.
I don't need to test this, I have done it. I also have quite literally thousands upon thousands of other engineers, books, universities etc backing me up, and you do not. Connecting two USB supplies together is a bad idea and will likely result in one switching off. Don't do it.
Either way, I'm done trying to convince an amateur. Feel free to do what you want.
I don’t know at this point if you have fun trolling around or what.
“If you have two power supplies at different voltages and connect them together, there will be a finite resistance through the cable and Ohm's Law applies.”
That is an error that only a person with minimal knowledge from TV shows can make. That is absolutely not true for regulated power supplies, SMPS or linear, because (even if the latter is called linear) they are not linear. So no, you cannot apply Ohm’s law.
“The rectifier you're referring to is the flyback diode in that case. But now as you've said yourself, the power supply will shut down if the voltage coming in is too high, which is frequently due to either a polyfuse or an efuse tripping”
Noooo, I’m talking about the diode at the end of any SMPS: buck, boost, fly-back or whatever type, or even a linear regulator. It is about how a regulated power supply works: as the voltage at the output goes up, it shuts down and stops the output current in an attempt to lower the voltage. There is no fuse in 99% of supplies out there, because they have active protection. It is called a transistor, not a polyfuse.
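A toy model of the behavior being described here: a typical one-quadrant supply can only source current, so when the node already sits above its setpoint it simply backs off to zero (the control law, gain and numbers are illustrative, not any real regulator):

```python
# Toy one-quadrant regulator: sources current below setpoint, cannot sink above it.
def supply_output_current(v_node, v_set, i_max=3.0, gain=100.0):
    """Simplified control law: current proportional to error, clamped to [0, i_max]."""
    error = v_set - v_node
    return min(max(gain * error, 0.0), i_max)

print(supply_output_current(4.9, 5.0))   # below setpoint: sources up to the limit
print(supply_output_current(5.1, 5.0))   # above setpoint: 0 A, output just stops
```

Both camps in this thread are partly describing this: the supply "does nothing" in the sense of sourcing zero current, and "something happens" in the sense that one side stops regulating or trips its protection.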
“I also have quite literally thousands upon thousands of other engineers, books, universities etc backing me up”
That is what you think, but you are wrong. Sorry mate. If you would just go to a lab and put 2 power supplies in parallel, like I do pretty much daily, you will see you are wrong.
You are the most ignorant, obstinate and arrogant person I’ve seen on HN… and it is full of them. I’m 100% sure you studied CS at a mediocre university, work as a SW dev, and think you can argue with an EE. You say you work with electrical stuff, but your other posts reveal you are a SW dev. Obviously you are a liar, trying to be right when you are so obviously wrong.
I do care what you do, because with such an arrogance and incompetence together, you are going to get somebody hurt or killed. Please start studying and stop being so stubborn when you are just wrong!
akshually (hehe, sorry, couldn't resist) Ohm's law keeps working; it's just that the resistive component in the formula varies with the applied voltage because of the active components, but the result still obeys the math. Very trivial observation tho, so downvote if you must lol
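To make the point concrete: a toy sketch (my own illustration, not from either commenter) of a regulated constant-current sink. V = I·R still holds at every operating point, but the effective resistance R = V/I is not a fixed constant, which is why naively applying Ohm's law with a single R value to a regulated supply goes wrong:

```python
def effective_resistance(v_applied, i_limit=2.0):
    """Hypothetical sink that draws a fixed 2 A regardless of applied voltage.
    Ohm's law rearranged: R = V / I. The 'resistance' moves with V."""
    return v_applied / i_limit

for v in (4.0, 5.0, 5.2):
    print(f"{v} V -> R_eff = {effective_resistance(v):.2f} ohm")
```

A fixed resistor would give the same R at every voltage; the regulated sink does not, so the simple fixed-R divider picture breaks down.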
They are likely not following the USB spec correctly. Things like pulling certain pins high or low or having a set resistance between certain pins or communications between the host and device will all affect what goes over the wire and whether the host or the device will accept this. Cables will also have some pins entirely unconnected.
Cheap, bad, shortcuts, etc. will result in an out of spec cable being necessary for an out of spec device to work correctly with an in or out of spec hub. It's terrifically frustrating but a fact of the world.
And this isn't just random no name knockoffs. The Raspberry Pi in certain versions didn't correctly follow the power spec. Both the Nintendo Switch and Switch 2 either incompletely, incorrectly, or intentionally abused the USB spec. The Lumen metabolism monitoring device doesn't follow the USB spec. This is one of those things where you want a bit of a walled garden to force users of a technology to adhere to certain rules. Especially when power and charging is involved which can cause fires.
> This is one of those things where you want a bit of a walled garden to force users of a technology to adhere to certain rules.
That’s what consumer protection laws with teeth and electric safety certifications like CE or UL are for, not walled gardens.
History has shown that relying on hardware DRM, like Apple did with Lightning, doesn’t prevent manufacturers from doing dangerous things, because they’ll find ways around it sooner rather than later.
Some badly designed USB-C devices don’t properly negotiate power supply, and as a result, only USB-A (since these always output 5V without any digital or electrical negotiation) or other non-compliant USB-C devices will actually charge them.
I’ve experienced this too and it’s not just no-names. I have a wireless gaming keyboard from SteelSeries, certainly a very legit brand. I lost the original USB-C cord. Tried every USB-C cord I could find, and they power the keyboard and charge it to exactly 1%, but no more.
Found plenty of people online with the same issue but no resolution.
Finally just paid the $25 to get the OEM SteelSeries replacement cable and it charges fully again. wtf… I guess the replacement cable was USB-A to C and I’ve only tried USB-C to C cables?
Actually, in most situations with this problem it is possible to solder 2 additional resistors inside the offending USB-C device. I have done that on a flashlight and can confirm that it fixed the problem.
Adding SteelSeries to my never buy list, along with Unicomp (Unicomp's literally died on me weeks after the 1 year warranty ended. Got told to buy another at full price, went to Ellipse instead at modelfkeyboards dot com for 4x the price and never been happier).
o.O I never knew USB could even do that... honestly, good tip here. How did you find that out? I would never have guessed this was a thing for newer USB.
It’s not really PD. It’s just that they aren’t USB-C spec compliant at all. USB-C has the power pins at 0 V by default, and you have to signal that there is a connected device to activate 5 V, while USB-A has 5 V hot all the time.
Since there aren’t any active chips in these cables, an A-to-C cable happens to have 5 V hot on the USB-C side, but this should not be relied on, as it isn’t true for C-to-C.
PD is optional for USB-C devices, but these out of spec devices don’t even support the basic USB-C resistor-based identification scheme (which is mandatory).
I have purchased multiple devices like this over the years. In all cases, the device lacks whatever circuitry is required to make a USB-C PD charger send 5 V down the line. Using a USB A-to-C cable works every time. Ironically, chaining a C-to-A adapter with an A-to-C cable then makes it work with a USB-C charger.
In order to get anything from a USB-C power supply, a device needs to have 5.1kΩ resistors from the CC1 and CC2 pins of the USB-C port to ground. Devices that cheap out on these two resistors (which cost well under a cent each) will not get any power from the power supply.
I've always ignored instructions that say to only use that product's USB cord (things like my kitchen scale and water flosser) and have never had an issue. Sounds like I've just gotten lucky, though, based on your experience.
I was under the impression that the USB protocol just fell back to 5 V at 1 A when power negotiation was unsuccessful.
USB-C is 0 V by default, and you have to signal to get anything at all. A lot of junky devices are non-compliant and aren’t set up to signal for 5 V, so they get nothing when plugged in with a C-to-C cable.
With resistors on the CC pins. In particular, there is resistor value that indicates legacy USB charging. This is in the USB-A to USB-C adapters and cables.
The manufacturers cheaped out in not including the right resistors.
I would also guess that some of these cases are designs that were adapted from previous USB mini- or micro-b configurations. Like an intern got the assignment, switched the connector, and wiped hands on pants, not realizing that an electrical change was required as well.
And if you spin the new board and it works with the A->C cable sitting on your desk, then what could possibly be different about plugging it into a C<->C cable, right?
> How does it negotiate with a host-powered device if it's unpowered to begin with?
Through a pair of resistors.
The unpowered device connects each of the two CC pins to the ground pin through a separate resistor of a specific value. The cable connects one of these CC pins to the corresponding pin on the other end of the cable (the second pin on each side is used to power circuitry within the cable itself, if it's a higher-spec cable). On the host side, each of the two CC pins is connected to the power supply through a separate resistor of another specific value. When you plug all this together, you have the host power supply connected to ground through a pair of resistors, which is a simple voltage divider. When the host detects the resulting voltage on one of the CC pins, it knows there's a device on the other end which is not providing power, and it can connect the main power pins of the connector to its power supply.
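The divider described above can be sketched numerically. The resistor values below are my recollection of the Type-C spec (sink pull-down Rd = 5.1 kΩ; source pull-up Rp to 5 V of 56 kΩ for default USB power, 22 kΩ for 1.5 A, 10 kΩ for 3 A), so treat the exact numbers as assumptions:

```python
VBUS = 5.0     # volts on the source side of the pull-up
RD = 5_100     # sink pull-down on CC, in ohms

def cc_voltage(rp):
    """Voltage the source sees on CC once the sink's Rd completes the divider."""
    return VBUS * RD / (RD + rp)

for rp, label in ((56_000, "default USB power"), (22_000, "1.5 A"), (10_000, "3.0 A")):
    print(f"Rp = {rp // 1000} k -> CC ~= {cc_voltage(rp):.2f} V ({label})")
```

The source watches for one of these distinct CC voltages; if no Rd is present (the "cheaped out on two resistors" case upthread), CC stays at the pull-up rail, no sink is detected, and VBUS is never enabled.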
I have two powerbanks that cannot be charged via the USB-C port when at 0%. The signaling circuitry simply doesn't work. No idea who designed this; it is silly beyond belief. I have to charge it with a normal 5 V A-to-C cable for 30 seconds, then unplug, and then all the PD stuff starts working and I can fast-charge it via the USB-C port again. I'm screwed without an A-to-C cable.
Holy shit. This explains why my Anbernic 406v is so weird. If I drain the battery too much, it won't let me charge with anything except with a normal 5v USB A to C cable and the USB-C cable that I use to charge it while it's on does nothing. It makes so much sense now.
This (and the GP) are because your device supports some sort of fast charge USB-PD negotiation, but does not support what is known as “dead battery mode”. Basically, dead battery mode enables those pull down resistors by default (when no power is applied) so you can get 5V to the system, where eventually it would charge up and the chip that can do PD negotiation will be powered. Usually this is done simply by having the negotiation chip default to pull down resistors internally when unpowered.
USB-C hosts and power adapters are only allowed to provide 5V if they can sense a downstream device (either via a network of resistors or via explicit PD negotiation).
Out-of-spec USB-C devices sometimes skip that, and out-of-spec USB-C chargers often (somewhat dangerously) always supply 5V, so the two mistakes sort of cancel out.
Careful. Some of these devices may not be USB-C at all but rather just using the port. If the device calls it USB then it's probably fine to use any cable, but if you just see "Type C", it's safest to assume they don't have it wired up according to any USB standard.
GLP-1s have legitimately changed my life for the better. I've always been very active but have consistently been moderately overweight. A relatively low dose of semaglutide has helped me lose 40 lbs and keep it off. I'm a year and a half in and have had very few side effects, no loss of efficacy, and my muscle mass has increased slightly despite all the negative press about muscle loss. My diet is similar in composition to what it was before, but I probably eat 25% less by volume. Recognizing I'm a sample of one, but my experience is reflected in the research.
I plan on being on a GLP-1 for the rest of my life. Perfectly fine with that. It seems like society has more problems with GLP-1s than its users do.
Day to day I use a home body composition scale (Withings Body Scan), the results of which have been corroborated by two DEXA scans I've done at my gym a little less than a year apart.
For me personally, the little bit of help in the form of forward progress on weight loss has given me a reason to be a little more methodical in my strength training, and I'm seeing a slow but consistent payoff. And as far as I can tell, I'm not fighting an uphill battle in terms of adding muscle mass at all because of the GLP-1.