Kiro's comments | Hacker News

Artificial cognition has been an established term since long before LLMs. You're conflating human cognition with cognition at large. Weather and cognition are both categories that contain many different things.

Yeah, I looked it up yesterday and saw that artificial cognition is a thing, though I must say I am not a fan and I certainly hope this term does not catch on. We are already knee-deep in bad terminology because of artificial intelligence (“intelligence” already being extremely problematic in psychology even without the “artificial” qualifier) and machine learning (the latter being infinitely better but still not without issues).

If you can't tell, I take issue with terms being taken from psychology and applied to statistics. The terminology should flow in the other direction, from statistics into psychology.

For background, I did undergraduate studies in both psychology and statistics (though I dropped out of statistics after two years), and this is the first time I have heard of artificial cognition, so I don't think the term is popular; a short internet search seems to confirm that suspicion.

Out of context, I would have guessed that artificial cognition relates to cognition the way artificial neural networks relate to neural networks, that is, models that simulate the mechanisms of human cognition and recreate some stimulus → response loop. However, my internet search revealed (thankfully) that this is not how researchers are using this (IMO misguided) term.

https://psycnet.apa.org/record/2020-84784-001

https://arxiv.org/abs/1706.08606

What the researchers mean by the term (at least the ones I found in my short internet search) is not actual machine cognition, nor a claim that machines have cognition, but rather a research approach that takes experimental designs from cognitive psychology and applies them to learning models.


Never a dupe when you say it is. You need another word.

Don't games make arbitrary drawings using fragment shaders all the time? Clouds, water, lava, slime, explosions, etc.


They do, but would usually use more detailed geometry instead of doing everything in a fragment shader. For example, water would need geometry matching lakes and rivers, vertex shaders to move the geometry to make waves, and fragment shaders to make it look like water.
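
Roughly, that split looks like this (a minimal GLSL sketch, not how any particular engine actually does it; the uniform and attribute names are made up):

    // Vertex shader: displace the water mesh with moving sine waves so the
    // surface ripples. u_time and the wave constants are hypothetical.
    uniform mat4 u_mvp;
    uniform float u_time;
    attribute vec3 a_position;
    varying float v_height;

    void main() {
        vec3 p = a_position;
        p.y += 0.15 * sin(p.x * 2.0 + u_time)
             + 0.05 * sin(p.z * 3.0 + u_time * 1.7);
        v_height = p.y;
        gl_Position = u_mvp * vec4(p, 1.0);
    }

    // Fragment shader: shade the displaced surface so it reads as water,
    // lighter near the wave crests.
    varying float v_height;

    void main() {
        vec3 deep = vec3(0.0, 0.2, 0.4);
        vec3 crest = vec3(0.6, 0.8, 0.9);
        gl_FragColor = vec4(mix(deep, crest, smoothstep(0.0, 0.2, v_height)), 1.0);
    }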


I think their explanation is great. The shader runs on every pixel within the quad, and your shader code needs to figure out whether the pixel is within the shape you want to draw or not. That's in contrast to drawing it pixel by pixel, as you would with a pen or on the CPU.

For a red line between A and B:

CPU/pen: for each pixel between A and B: draw red

GPU/shader: for all pixels: draw red if the pixel lies on the segment between A and B
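
A minimal Shadertoy-style sketch of the GPU/shader side (the endpoints A and B and the line thickness are made-up values):

    // Runs once per pixel; each pixel decides for itself whether it lies
    // close enough to the segment between A and B to be painted red.
    float distToSegment(vec2 p, vec2 a, vec2 b) {
        vec2 ab = b - a;
        float t = clamp(dot(p - a, ab) / dot(ab, ab), 0.0, 1.0);
        return length(p - (a + t * ab));
    }

    void mainImage(out vec4 fragColor, in vec2 fragCoord) {
        vec2 A = vec2(100.0, 100.0);  // hypothetical endpoints, in pixels
        vec2 B = vec2(500.0, 300.0);
        float d = distToSegment(fragCoord, A, B);
        fragColor = (d < 1.0) ? vec4(1.0, 0.0, 0.0, 1.0)   // on the line: red
                              : vec4(0.0, 0.0, 0.0, 1.0);  // everywhere else: black
    }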


Figuring out whether a pixel is within a shape, or lies on the segment between A and B, is part of the rasterizing step, not the shading. At least in the parent’s analogy. There are quite a few different ways to draw a red line between two points.

Also using CPU and GPU here isn’t correct. There is no difference in the way CPUs and GPUs draw things unless you choose different drawing algorithms.


While (I presume) technically correct, I don't think your clarifications are helpful for someone trying to understand shaders. The only thing that made me understand (fragment) shaders was something similar to the parent's explanation. Do you have anything better?

It's not about the correct way to draw a square or a line, but about using something simple to illustrate the difference. How would you write a shader that draws a 10x10 pixel red square on Shadertoy?
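
For concreteness, something like this is what I have in mind (a rough Shadertoy-style sketch; the square's position is picked arbitrarily):

    void mainImage(out vec4 fragColor, in vec2 fragCoord) {
        // hypothetical 10x10 square with its lower-left corner at (20, 20)
        vec2 local = fragCoord - vec2(20.0, 20.0);
        bool inside = all(greaterThanEqual(local, vec2(0.0)))
                   && all(lessThan(local, vec2(10.0)));
        fragColor = inside ? vec4(1.0, 0.0, 0.0, 1.0)
                           : vec4(0.0, 0.0, 0.0, 1.0);
    }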


You’re asking a strange question that doesn’t get at why shaders exist. If you actually want to understand them, you must understand the bigger picture of how they fit into the pipeline, and what they are designed to do.

You can do line drawing on a CPU or GPU, and you don't need to reach for shaders to do that. Shaders are not necessarily the right tool for that job, which is why comparing shaders to pen drawing makes it seem like someone is confused about what they want.

Shadertoy is fun and awesome, but it's fundamentally a confusing abuse of what shaders were intended for. When you ask how to make a 10x10 pixel square, you're asking how to make a procedural texture with a red square, imposing a non-standard method of rendering on your question, and failing to talk about the way shaders normally work. To draw a red square the easy way, you render a quad (a pair of triangles) and assign a shader that returns red unconditionally. You tell the rasterizer the pixel coordinates of your square's corners, and it figures out which pixels lie between them, before the shader is ever called.
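
A minimal sketch of that easier path (plain GLSL rather than Shadertoy; the quad's vertex data and draw call are assumed to be set up by the application):

    // Vertex shader: passes the quad's corner positions straight through.
    // The rasterizer then works out which pixels the two triangles cover.
    attribute vec2 a_position;  // corners supplied by the application

    void main() {
        gl_Position = vec4(a_position, 0.0, 1.0);
    }

    // Fragment shader: only ever called for covered pixels,
    // so it can return red unconditionally.
    void main() {
        gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
    }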


You wouldn't know anything about it considering you've been wrong in all your accusations and predictions. Glad to see no-one takes you seriously anymore.


:eyes: Go back to the lesswrong comment section.


Funny thing to say considering the author of Datasette himself says it's accurate.


> "Mosaic" is a well-known web browser

A browser that was discontinued 30 years ago.


Incredible that we happen to be alive at the exact moment humanity peaked in its interlingual communication. With Google Translate and hand gestures there is no need to evolve it any further.


> it’s a useful technology that is very likely overhyped to the point of catastrophe

I wish more AI skeptics would take this position but no, it's imperative to claim that it's completely useless.


I've had *very* much the opposite experience. Very nearly every AI skeptic take I read has exactly this opinion, if not always so well-articulated (until the last section, which lost me). But counterarguments always attack the complete strawman of "AI is utterly useless," which very few people, at least within the confines of the tech and business commentariat, are making.


Maybe I'm focusing too much on the hardliners, but I see it everywhere, especially in tech.


If you’re talking about forums and social media, or anything attention-driven, then the prevalence of hyperbole is normal.


Where’s all the data showing productivity increases from AI adoption? If AI is so useful, it shouldn’t be hard to prove it.


Measuring productivity in software development, or in white-collar jobs in general, is notoriously difficult; we have struggled to quantify even the gains from the introduction of digital technology and the internet, let alone things like static vs. dynamic types or the productivity differences between user interface modalities. Why would we expect to be able to do it here?

https://en.wikipedia.org/wiki/Productivity_paradox

https://danluu.com/keyboard-v-mouse/

https://danluu.com/empirical-pl/

https://facetation.blogspot.com/2015/03/white-collar-product...

https://newsletter.getdx.com/p/difficult-to-measure


I found the last section to be the most exciting part of the article. It describes a conspiracy around AI development that is not about the AI itself, but about the power a few individuals will gain by building data centers that rival small cities in size, power, and water consumption, and which will then be used to gain political power.


I don't think you understand how off that statement is. It's also pretty ignorant, considering Google Translate barely worked at all for many languages. So no, it didn't work great, and even for the best possible language pair Google Translate is not in the same ballpark.

