
According to the FT article:

"Ukrainian officials say Kyiv is now ready to sign the agreement on jointly developing its mineral resources, including oil and gas, after the US dropped demands for a right to $500bn in potential revenue from exploiting the resources."

--

"The final version of the agreement, dated February 24 and seen by the FT, would establish a fund into which Ukraine would contribute 50 per cent of proceeds from the “future monetisation” of state-owned mineral resources, including oil and gas, and associated logistics. The fund would invest in projects in Ukraine.

It excludes mineral resources that already contribute to Ukrainian government coffers, meaning it would not cover the existing activities of Naftogaz or Ukrnafta, Ukraine’s largest gas and oil producers.

However, the agreement omits any reference to US security guarantees which Kyiv had originally insisted on in return for agreeing to the deal. It also leaves crucial questions such as the size of the US stake in the fund and the terms of “joint ownership” deals to be hashed out in follow-up agreements."


Desktop Toys!

I remember playing with this on Windows 95, but it turns out it was made for Windows 3.1 initially.

YouTube video: https://www.youtube.com/watch?v=sesM1fygKDs

Internet archive: https://archive.org/details/desktoptoys_201911


I was using this on XP!

Edit: Noooooo, I was playing with some other program! It looked slightly better than this. Can't find it now :(


TIL PowerToys was a 3.1 thing. Many thanks.


USB sticks should work for this purpose. Rewritable, smaller and faster than optical media.


Started doing this when the first USB sticks became available. Even in Windows 95, which didn't support USB at the time.

Interestingly, W95 could be trimmed down to about 35 MB, and carefully adding Word & Excel from Office 97 took about another 65 MB, so it ended up fitting if you had one of the huge 128 MB USB sticks.

That's about the smallest I figure you can make a 32-bit Windows office machine that stays "fully compatible" with the latest Windows & Office versions, as long as you carefully stored your office files on a FAT32 partition and limited your expectations (things like file sizes and the number of Excel rows) to what W95 could handle. And W95 was as functional an office machine as millions of people need today.

For those of you who did manage to run W95 at its full 2 GHz maximum over 10 years later, on much faster motherboards than existed in the 1990s, you know what I'm talking about when I say the most noticeable thing is zero latency in almost all human-computer interactions.

You could be doing all kinds of office work, with lots of other things to "boot".

Just got a couple more of the "small" 128 GB SATA SSDs that are finally cheap enough now to use like "game cartridges" for bootable OSes. Not a much different "application", just faster booting and operation than most USB sticks.

Two partitions on each SSD, one for an OS only, one for ALL related storage.

Still have some massive multibooting going on, but with these little SSDs the most up-to-date are going to be W11, W10x86, W10x64, Debian, Mint & Fedora.

Fortunately I got a few of the cheap pre-NUC ASUS miniPCs that have a simple hatch on top and came with a full-size SATA desktop HDD right there. You get even better ventilation and no exposed electronics with the cover off all the time: remove the HDD (for good now) and just slip in whichever SSD you feel like booting at the time.

Looks like about 128 GB will do what 128 MB would do back in the day.


> Even in Windows 95, which didn't support USB at the time.

That's because DOS-based Windows could use the BIOS for disk access, and BIOS presented USB drives as hard drives. I believe you can even do the same with an NVMe SSD that has a suitable boot ROM.


Yes, it's good to emphasize that UEFI or genuine-BIOS motherboards will access the USB drives on power-up; from there, any OS that can boot from that type of partition layout can go forward. DOS, W9x, and NT5 need CSM enabled to boot on a UEFI motherboard, and W7 loves it as well.

W98 would install and run from USB too, as long as USB device drivers did not get installed. That way, once booted, anything you plugged into a USB socket on the motherboard showed up as "unknown" and remained inaccessible. But if the second USB device was already plugged in when you booted, W9x (or DOS) assigned it a drive letter and you could access the files.

Sometimes I still use a small FAT32 partition with simple DOS on a Syslinux'ed volume to boot distros from the NT5 bootloader. That way you can edit the Linux multiboot menu in Windows, or even in DOS, which sure boots a lot faster today.
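For anyone curious, a rough sketch of what that kind of menu file (a syslinux.cfg on the FAT32 volume) can look like; the label, kernel/initrd paths, and root device here are made-up placeholders, not my actual setup:

    DEFAULT mint
    PROMPT 1
    TIMEOUT 50

    LABEL mint
        KERNEL /boot/mint/vmlinuz
        APPEND initrd=/boot/mint/initrd.img root=/dev/sda3 ro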

Now they have USB enclosures for M.2 drives, though usually without the flexibility of taking both NVMe & SATA.


And on top of that, the Google datacenter in Hamina doesn't use ocean water with 3.5% salinity, but water from the Baltic sea, which in that part of the Gulf of Finland only has roughly 0.3% salinity – a tenth of ocean water.


Sounds less like a Baltic Sea thing and more like being in a fjord or bay that's fed by fresh water, keeping a lot of the salty seawater out.


But telemetry can be disabled with a single setting?

    "telemetry.telemetryLevel": "off"


Why is it that, when people have legitimate complaints about something on Hacker News (in this case, the fact that what is delivered as a binary is not what you see as source code), people are so quick to side with the tool and treat it as trustworthy?

How do you know that flag does anything except attempt to hide sending of telemetry? Or not even that, simply logging that you would prefer not to have telemetry taken.

The EULA explicitly forbids you from attempting to reverse engineer the binary, so you’re literally taking this on blind faith.

Don’t you find that troubling?

Even if Microsoft didn’t have a murky track record with telemetry in their operating system (being randomly turned back on, or never being completely turned off), the default position should never be to trust.

Trust is earned.


It is very interesting how everyone rants about trust but is perfectly happy running their 'open-source' code on closed-source CPUs (Intel, AMD x86-64, Apple M1, etc.) with closed-source firmware, while raving about how they 'don't know what a flag does' and how they want, or even have, 'full control' of their computers and the code that runs on them, which is not remotely true.

At the risk of running into a slippery slope, unless one single-handedly:

- audits the entire codebase for some open-source OS;

- audits the entire specification for an open-source ISA, and an open-source implementation of said ISA, such as RISC-V BOOM;

- locally compiles the audited codebase on the audited CPU, targeting the audited ISA;

one cannot claim to say 'I want to know what that flag does'.

For all we know, Intel might have NSA backdoors and might 'phone home' to some server. I understand the idealism behind 'trust is earned', but at some point, trust has to be given, because unless we are willing to make some serious compromises, we will never be in full control of the complete hardware-software stack.


To be perfectly fair, people are extremely displeased with x86 hardware back doors. So it’s not exactly the same as the way you present it.

Though I agree overall. I’ve read the code of (and compiled) the operating system I use day to day, but apparently that’s uncommon, and I’m not above just trusting some package maintainer.

That said: trust is still earned, and easily lost.

There’s a lot to indicate lost trust in Microsoft (despite the fact that I did say in my parent comment that it’s separate from the point).


> perfectly happy

That’s an assumption you made, and it’s a wrong one.


I'll be straightforward and say that as a whole, I do not trust an operating system in which one of the first configurable options it presents me is whether I want to toggle my 'advertising ID' on or off.

As a result, quite honestly, I wouldn't trust any product produced by that company.


On the other hand, why are so many so quick to jump to the distrusting side? For example, people on here lost their minds over the Warp terminal having telemetry, but none of it includes any of the terminal input or output, which makes it harmless. Yet telemetry apparently equals some horrific deed. These exact same software developers likely lace the hell out of their own apps and systems with telemetry, a mention of which is probably buried in some user agreement somewhere.

Microsoft publishes a detailed article on what Visual Studio Code telemetry is, how to disable it, and even how to view the telemetry events going out.

https://code.visualstudio.com/docs/getstarted/telemetry

I'm not really sure jumping through hoops and developing and maintaining a crippled open-source app is the sane default response here. It's likely a waste of time, and a very, very, very small percentage of Visual Studio Code users will use it.


>How do you know that flag does anything except attempt to hide sending of telemetry?

I don't, but neither does anyone who runs VSCodium, because they also run a random binary from the internet without having any idea whether that binary is in fact compiled from the source code provided, and I suspect nobody who runs it has read that code either.

This is classical security theater where people will run binaries from basically anonymous people on the internet and claim this is more trustworthy than running something provided by Microsoft.


I'm sure we're in a minority, but some of us do actually build things like this from source (common for Gentoo, Arch, Guix, and NixOS) and give the source changes a quick cursory glance on every upgrade/rebuild. For a flag like this I may dive into the code of the version I'm running and take a look at what's going on.

So with a sample size of one I can tell you that "nobody" is false.

Even without doing that, just the fact that I can know it's built from a known tag off master on a public, high-interaction git repo makes it a completely different story than downloading some arbitrary binary.


> This is classical security theater where people will run binaries from basically anonymous people on the internet and claim this is more trustworthy than running something provided by Microsoft.

With a large enough group of "anonymous people" [0] inspecting the code, the chance of a security hole, intentional or otherwise, goes down [1]. Notice that this is NOT a guarantee by any means -- it's a chance. [2]

Contrast that to a blob of binary code with a EULA stating you aren't allowed to inspect it. There are obviously non-malicious reasons for doing that, but it doesn't (and shouldn't) inspire trust. So some people don't trust it. They are not irrational for doing so.

In terms of probability, I would put my money that Microsoft is overall better than the median set of developers at writing code with fewer technical bugs. However, I would also bet that they are more likely to intentionally add in more telemetric data than they let on, and/or misrepresent what toggles and settings actually change.

Whether I actually (can) read even a single line of code does not change any of that. Just the fact that someone can view your code has a large effect on how you write it [3].

We can talk all day about whether specifically VSCodium meets some threshold of actual reviewers/auditors, but that's not the point.

[0]: There are established lines of trust via things like comment history, other projects, and even other commits. FOSS devs aren't (always) just purely anonymous.

[1]: "Many eyes make all bugs shallow"

[2]: This also says nothing like "all FOSS is created equal" or that "projects with thousands of contributors are magically more secure".

[3]: And yes, of course that could mean they just obfuscate it more. But that still takes more time and effort, reducing the chances/number of cases, and increases the chance of detection.


Maybe you. Maybe others.

I don’t, though, so this effort is somewhat beneficial, despite the fact that it’s really lacking in some features.

Though that is quite telling, to be perfectly honest. Some components can’t be replicated without proprietary elements, which indicates there are a lot more binary blobs than normal.

I’ll admit to not looking at the code, as I do not currently use vscode.


> Why is it, when people have legitimate complaints about something on hacker news (in this case the fact that what is delivered as a binary is not what you see as source code), people are so quick to side with the tool for being trustable?

Corporate bootlicking.


Assuming that is true, which it very well might be, you still have to check every extension for telemetry as well.


What possible reason is there for you to trust that that works or will continue working?


https://foundation.mozilla.org/en/privacynotincluded/ is the "*privacy not included" guide from Mozilla. However, the main "creepiness" score is based on user votes, so it's not particularly useful (people rate brands they like highly, regardless of actual security).

https://templates.blakadder.com is a repository of devices flashable with Tasmota (an open firmware for devices with ESP8266 or ESP32, which are very common chips for Wi-Fi-based IoT devices).

https://zigbee.blakadder.com is a repository of Zigbee devices, which don't connect to the internet at all by design. You can use them with a Zigbee gateway.


Amazon has been building a LoRa transmitter (Amazon Sidewalk) into their Echo and Ring devices since 2017, it seems.

They’re not hiding it, though. It’s marketed as something you might want to keep enabled to “help your neighbors” by sharing its location.

- “if your Echo device loses its wifi connection, Sidewalk can simplify reconnecting to your router”

- “customer support can still troubleshoot problems even if your devices lose their wifi connection”

- you “support community extended coverage benefits such as locating pets”


The statement “This update does not expand our ability to share data with Facebook.” seems to be carefully worded around a technicality: they, WhatsApp, are not the ones sharing the business conversations with Facebook; instead, Facebook is giving the businesses themselves tools to share them with FB for ad targeting.

From earlier FB announcement (https://about.fb.com/news/2020/10/privacy-matters-whatsapp-b...): “When Facebook acts as a hosting provider to a business, it will use the messages it processes on behalf of and at the instruction of the business. This is an industry standard practice among many companies that offer hosting solutions. While Facebook will not automatically use messages to inform the ads that a user sees, as is always the case, businesses will be able to use chats they receive for their own marketing purposes, which may include advertising on Facebook.”


> “This update does not expand our ability to share data with Facebook.”

Alternatively, the key word could be "expand". If they are already sharing everything they can, there is no further expansion possible.


“This update does not expand our ability to share data with Facebook.

We’re now moving back the date on which people will be asked to review and accept the terms. No one will have their account suspended or deleted on February 8.”

Is there an unbiased analysis somewhere of what actually changes under the new privacy policy?


The Verge also claims that not much changed from before: https://www.theverge.com/2021/1/15/22233257/whatsapp-privacy...


As much as I dislike Zuck, I think WhatsApp is being honest about what's happening, if you follow the link on that page to https://faq.whatsapp.com/general/security-and-privacy/about-...

I.e. it's more about allowing ad-viewership data to be shared between the platforms (FB, IG, WA). "But WhatsApp has no ads?", well, I guess in February^W March, Zuck's going to pop up on your phone and say "Surprise, motherfucker!".

On the topic of businesses, I remember booking a domestic plane ticket in Asia, giving the travel agency my phone number, and getting WhatsApp messages from the agency's business account. I guess there were no concerns about data privacy laws there...


> well, I guess in February^W March, Zuck's going to pop up on your phone and say "Surprise, motherfucker!".

This made me chuckle. I could totally see this becoming a thing, but I'm imagining it like Microsoft's Clippy. You're doing your normal thing using an app, when all of a sudden this caricature of The Zuck pops out of the lower corner and screams "I see you are looking at a post from someone in your friends list. Wouldn't you rather look at this <insert random thing> instead?" The harder/faster you swipe to make him go away the more insistent he gets.


Yes, I am confused as well.

Something has to have changed, or else they wouldn't need people to accept new terms, right?


The original title is slightly better in this regard: “Firefox 79: The safe return of shared memory, new tooling, and platform updates”

