
If it looks good, it's correct.

Nope.

I predict we will see compute-in-flash before we see cheap laptops with 128+ gigs of ram.

There was a company that did compute-in-dram, which was recently acquired by Qualcomm: https://www.emergentmind.com/topics/upmem-pim-system

You could get 128GB RAM laptops from the time DDR4 came around: workstation-class laptops with 4 RAM slots would happily take 128GB of memory.

The fact that nowadays there are few to no laptops with 4 RAM slots is entirely artificial.


I was musing this summer about whether I should get a refurbed ThinkPad P16 with 96GB of RAM to run VMs purely in memory. Now that same 96GB of RAM costs as much as a second P16.

I feel you, so much. I was thinking of getting a second 64GB node for my homelab and I thought I'd save that money… now the RAM alone costs as much as the node, and I'm crying.

Lesson learned: you should always listen to that voice inside your head that says: “but I need it…” lol


I rebuilt a workstation after a failed motherboard a year ago. I was not very excited about being forced to replace it on a day's notice and cheaped out on the RAM (only got 32GB). This is like the third or fourth time I've taught myself the lesson not to pinch pennies when buying equipment/infrastructure assets. It's the second time the lesson was about RAM, so clearly I'm a slow learner.

I can't tell if this is optimism for compute-in-flash or pessimism with how RAM has been going lately!

The thing that is supposed to happen next is high-bandwidth flash. In theory, it could allow laptops to run the larger models without being extortionately costly, by loading directly from flash into the GPU (not by executing in flash). But I haven't seen figures for the actual bandwidth yet, and no doubt it will be expensive to start with. The underlying flash technology has much higher read latency than DRAM, so it's not really clear (to me, at least) whether increasing parallelism alone can deliver the speeds needed to remove the need to cache in VRAM.

We’ve had “compute in flash” for a few years now: https://mythic.ai/product/

Yeah, especially given what is happening in the memory market.

Feast and famine.

In three years we will be swimming in more ram than we know what to do with.


Kind of feel that's already the case today... I find 4GB is still plenty even for business workloads.

Video games have driven the need for hardware more than office work. Sadly, games are already being scaled back, with more time spent on optimization instead of content: consumers can't be expected to have the kind of RAM they normally would, and everyone will be forced to make do with whatever they have for a long time.

That might not be the case. The kind of memory that will flood the second-hand market may not be the kind we can stuff into laptops or even desktop systems.

Memristors are (IME) conspicuously missing from the news. They were promised to act as both persistent storage and fast RAM.

If only memristors weren't vaporware that has "shown promise" for three decades now and gone nowhere.

By "we" do you mean consumers? No, "we" will get neither. This is an unexpected, irresistible opportunity to create a new class by controlling the technology that people are required, and desire, to use (large genAI) with a comprehensive moat: financial, legislative and technological. Why make affordable devices that enable at least partial autonomy? Of course the focus will be on better remote operation (networking, on-device secure computation, advancing a narrative that equates local computation with extremism and sociopathy).

Push Washington to grill the foundries and their customers. Repeat until prices drop.

Can confirm, have used bubble sort for incrementally sorting particles in a particle system and plants in a terrain renderer.
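
A rough sketch of how that works, purely my own illustration (not the poster's code, and the type/function names are made up): keep the particles roughly sorted back-to-front and run a single O(n) bubble pass per frame, relying on frame-to-frame coherence to keep the order good enough for alpha blending.

    #include <cstddef>
    #include <utility>
    #include <vector>

    struct Particle {
        float x, y, z;
        float depth;  // distance from camera, refreshed each frame
    };

    // One bubble pass, ordering particles far-to-near (back-to-front for blending).
    // Returns true if anything moved, in case the caller wants an extra pass.
    bool bubble_pass_back_to_front(std::vector<Particle>& particles) {
        bool swapped = false;
        for (std::size_t i = 0; i + 1 < particles.size(); ++i) {
            if (particles[i].depth < particles[i + 1].depth) {
                std::swap(particles[i], particles[i + 1]);
                swapped = true;
            }
        }
        return swapped;
    }

The per-frame cost stays bounded, and whatever disorder camera motion introduces gets fixed up over the next few frames, which is why bubble sort, normally a punchline, is a reasonable fit for this kind of incremental sorting.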


I can't dynamically resize my desktop to any arbitrary resolution in Wayland, whereas under X11+Nvidia it's just "nvidia-settings --assign CurrentMetaMode=..."

This is mandatory for me, as I'm constantly connecting to my workstation from other devices via Moonlight or Chrome Remote Desktop and I want the streaming resolution to match the resolution of my client.



Setting arbitrary custom resolutions on the fly works fine on Wayland for me:

    swaymsg -- output eDP-1 mode --custom 1900x725


surprisingly lacking in Gnome


I've used https://github.com/maxwellainatchi/gnome-randr-rust for this for Sunshine sessions in Gnome.


Gnome considers features a bug, so not at all surprising.


no audio sample on the webpage?


Your "bunch of projectiles" is what, 20? You should be able to do 20,000 on even the lowliest integrated GPU these days.

As a senior game/graphics programmer looking at the screen caps, I see a game that should never be using more than 1% of the CPU and a few percent of the GPU.


> You should be able to do 20,000 on even the lowliest integrated GPU these days.

It all depends on how things are organized. If each object needs to run logic every frame, you can start to run into severe performance issues way before you get to 10k. 60fps means you have 16 milliseconds to check everything in the scene. This can be a lot of time but you only have to make one small mistake to lose any advantage.

We have to reach for data-oriented techniques like ECS if we want to keep the CPU ~idle while managing 10k+ entities in the scene. This stuff is a big pain in the ass to use if you don't actually need it.
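
To make that concrete, here is a minimal sketch of the struct-of-arrays idea behind the data-oriented/ECS approach. It's a generic illustration, not any particular engine's API, and the component and function names are invented: components live in contiguous arrays and a "system" walks them linearly.

    #include <cstddef>
    #include <vector>

    // Components stored struct-of-arrays: all Xs together, all Ys together, etc.
    struct Positions  { std::vector<float> x, y; };
    struct Velocities { std::vector<float> vx, vy; };

    // One "system": integrate every entity's position in a tight linear loop.
    // No virtual dispatch, no pointer chasing; the prefetcher sees plain arrays.
    void integrate(Positions& pos, const Velocities& vel, float dt) {
        const std::size_t n = pos.x.size();
        for (std::size_t i = 0; i < n; ++i) {
            pos.x[i] += vel.vx[i] * dt;
            pos.y[i] += vel.vy[i] * dt;
        }
    }

Per-entity work becomes a few arithmetic ops on data that's already streaming through cache, which is how 10k+ entities fit comfortably in a 16 ms frame. The pain-in-the-ass part is the bookkeeping a real ECS needs (archetypes or sparse sets) so that entities with different component combinations still iterate efficiently.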


Would be more interesting if there wasn't a doorslam after zooming in for three seconds.


The subject would need to be a lot more interesting for me to consider paying.


Definitely the first time I've been asked to buy an expiring access pass just to zoom in on an image.

I get that this is cool and it was difficult to make, but there is zero chance I would purchase without knowing how far I can actually zoom in. Maybe allow people one full zoom before the paywall?


The bulk of the issues mentioned in the article seem to fall into the "Why doesn't the BBC give equal time to both objective reality and my bullshit right-wing talking points?" category.

While I agree that the BBC did some shit that a news organization shouldn't have done, disproportionate coverage does not by itself imply bias - thinking otherwise is classic both-sides-ism.


So basically: you're equally biased, and that means the BBC's bias is fine.


This is more easily summarized as "Shitty people are never shitty in just one way".

