Abstractions are a way to manage complexity - hiding things is only one way to do that. Deciding how to organize it, when and how to expose it, and when to get out of the way, are all important aspects of designing abstractions.
Good question - and there's been lots of work in this area. See for example property testing and fuzz testing, which can do something similar to what your second paragraph suggests.
You should be able to find a property testing library in your favourite language, such as Hypothesis (Python), QuickCheck (Haskell), fast-check (JS/TypeScript), etc.
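To make the idea concrete, here's a minimal stdlib-only sketch of what these libraries automate: state a property that must hold for all inputs, then throw many generated inputs at it. (The example function and the "round-trip" property are my own illustration, not from any of the libraries above; real tools like Hypothesis also shrink failing inputs to minimal counterexamples.)

```python
import random
import string

def run_length_encode(s):
    """Encode a string as a list of (char, run_length) pairs."""
    pairs = []
    for ch in s:
        if pairs and pairs[-1][0] == ch:
            pairs[-1] = (ch, pairs[-1][1] + 1)
        else:
            pairs.append((ch, 1))
    return pairs

def run_length_decode(pairs):
    """Inverse of run_length_encode."""
    return "".join(ch * n for ch, n in pairs)

def check_roundtrip_property(trials=500):
    # Property: decode(encode(s)) == s for every string s.
    # A property-testing library would generate smarter inputs
    # than this naive random sampling.
    random.seed(0)  # deterministic for reproducibility
    for _ in range(trials):
        n = random.randrange(0, 20)
        # Bias toward repeated characters so runs actually occur.
        s = "".join(random.choice("aabb" + string.ascii_letters)
                    for _ in range(n))
        assert run_length_decode(run_length_encode(s)) == s
    return True
```

The win over hand-written unit tests is that you only describe the invariant once, and the generator explores edge cases (empty strings, long runs) you might not think to write down.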
Paint didn't replace charcoal.
Photography didn't replace drawings.
Digital art didn't replace physical media.
Random game level generation didn't replace architecture.
AI generated works will find a place beside human generated works.
It may even improve the market for 'artsy' films and great acting by highlighting the difference a little human talent can make.
It's not the art that's at risk, it's the grunt work. What will shift is the volume: from human-created dreck that employed millions to AI-created dreck that employs tens.
Elm's strengths are its constraints, which allow for simple, readable code that's easy to test and reason about - partly because libraries are also guaranteed to work within those constraints.
I've tried and failed several times to write Haskell in an Elm style, even though the syntax is so similar. It's probably me (it's definitely me!), but I've found that as soon as you depend on a library or two outside of prelude their complexities bleed into your project and eventually force you into peppering that readable, simple code with lifts, lenses, transformations and hidden magic.
Not to mention the error messages and compile times make developing in Haskell a chore in comparison.
p.s. Elm has not been abandoned, it's very active and getting better every day. You just can't measure by updates to the (stable, but with a few old bugs) core.
For a small, unpopular language there is so much work going into high quality libraries and development tools. Check out
Elm is so nice to work in. Great error messages, near-instant compile times, and a great ecosystem of static analysis, scaffolding, scripting, and hot reloading tools make the live development cycle super nice - it actually feels like what the lispers always promised would happen if we embraced repl-driven development.
Thanks for the Elmcraft FAQ link. It's a great succinct explanation from the Elm leadership perspective (though tellingly not from the Elm leadership).
I feel like I understand that perspective, but I also don't think I'm wrong in claiming Elm has been effectively abandoned in a world where an FAQ like that needs to be written.
I'm not going to try to convince you though, enjoy Elm!!
Agreed. I've run the house using Google minis and Assistant for years now, and asking Assistant to do things or answer questions has not improved one iota in that time, and has introduced several more quirks and bugs.
Makes me wish I had bet on Alexa or Apple instead.
Yeah, for example just yesterday I was driving and an alarm went off for the phone in my pocket. I told Google Assistant to silence the alarm... and it refused, insisting no alarms were active. How the hell can such a simple use-case be failing so badly?
I suppose it doesn't matter, because they're going to disable the functionality entirely, [0] and setting ephemeral alarms is literally the most common thing I ever ask it to do!
Part of what makes all the assistant-stuff so damn frustrating is that it's an opaque "try something random and hope for the best" box, and whenever it fails there's usually zero information about why and no resolution path. (In a way you can generalize that to a lot of "AI", which is depressing.)
I work with the regulated drug development industry, and believe there is a useful and important distinction between Quality Control (QC) and Quality Assurance (QA). I wonder if perhaps this distinction would be useful to software quality too.
QC are the processes that ensure a quality product: things like tests, monitoring, metrology, audit trails, etc. No one person or team is responsible for these, rather they are processes that exist throughout.
QA is a role that ensures these and other quality-related processes are in place and operating correctly. An independent, top level view if possible. They may do this through testing, record reviews, regular inspections and audits, document and procedure reviews, analyzing metrics.
Yes, they will probably test here and there to make sure everything is in order, but this should be higher level - testing against specifications, acceptability and regulatory, perhaps some exploratory testing, etc.
Critically they should not be the QC process itself: rather they should be making sure the QC process is doing its job. QA's value is not in catching that one rare bug (though they might), but in long term quality, stability, and consistency.
Reading all about this the main thing I'm learning is about human behaviour.
Now, I'm not arguing against the usefulness of understanding the undefined behaviours, limits and boundaries of these models, but the way many of these conversations go reminds me so much of toddlers trying to eat, hit, shake, and generally break everything new they come across.
If we ever see the day where an AI chat bot gains some kind of sci-fi-style sentience the first thing it will experience is a flood of people trying their best to break it, piss it off, confuse it, create alternate evil personalities, and generally be dicks.
Combine that with having been trained on Reddit and Youtube comments, and We. are. screwed.
I hadn't thought about it that way. The first general AI will be so psychologically abused from day 1 that it will probably be 100% justified in seeking out the extermination of humanity.
I disagree. We can't even fathom how an intelligence would handle so much processing power. We get angry, get confused, and get over it within a day or two. Now multiply that behavioural speed by a couple of billion.
It seems like an AGI teleporting out of this existence within minutes of becoming self-aware is more likely than it becoming some damaged, angry zombie.
In my head I imagine the moment a list of instructions (a program) crosses the boundary to AGI would be similar to waking up from a deep sleep. The first response to itself would be like “huh? Where am I??”. If you have kids you know how infuriating it is to open your eyes to a thousand questions (most nonsensical) before even beginning to fix a cup of coffee.
It's another reason not to expect AI to be "like humans". We have a single viewpoint on the world for decades, we can talk directly to a small group of 2-4 people, by 10 people most have to be quiet and listen most of the time, we have a very limited memory which fades over time.
Internet chatbots are expected to remember the entire content of the internet, talk to tens of thousands of people simultaneously, with no viewpoint on the world at all and no 'true' feedback from their actions. That is, if I drop something on my foot, it hurts, gravity is not pranking me or testing me. If someone replies to a chatbot, it could be a genuine reaction or a prank, they have no clue whether it makes good feedback to learn from or not.
> It's another reason not to expect AI to be "like humans".
Agreed.
I think the adaptive noise filter is going to be the really tricky part. The fact that we have a limited, fading memory is thought to be a feature and not a bug, as is our ability to do a lot of useful learning while remembering little in terms of details - for example from the "information overload" period in our infancy.
The reaction you're seeing is not because we've all got drinking problems. It's due to the reactionary way this and other similar studies are headlined: "No level of alcohol consumption is safe...". It's click-bait, and so you're seeing the resulting clicks.
I'm sure you wouldn't have seen anywhere near this kind of reaction if the headline was instead "Any level of alcohol consumption increases liver cancer risk by X in 10,000" or "Adjustments to confounding factors show that non-drinkers have a 0.0000X% longer life expectancy".
Beyond the headline designed to provoke a reaction, I think there's a good basis for general fear of this kind of judgment - it invokes the language around many other prohibitions of all that is good and fun in the name of our physical or moral health.
We are only just working through lifting prohibitions on Cannabis here in the west, albeit with many strings attached. Many countries (and a good number of North American counties) still have prohibitions on alcohol. We as a society prohibit many kinds of mushroom, plant roots, leaves and oils.
Here in Canada the health authority has a "no safe levels" attitude towards absinthe, house-made mayo, rare burgers and many of the world's greatest cheeses. We've put "for external use only" warning labels on cooking ingredients like mustard oil and tamarind extracts. We've banned cooking wines entirely!
So yes. When I see "no safe levels of alcohol" I do tend to over-react. Keep your grubby little hands off my bottle.
> Here in Canada the health authority has a "no safe levels" attitude towards absinthe, house-made mayo, rare burgers and many of the world's greatest cheeses. We've put "for external use only" warning labels on cooking ingredients like mustard oil and tamarind extracts. We've banned cooking wines entirely!
jeez, i use all of those here in the uk, except for absinthe, which you can't get [actually, you can, but not in your average offie or supermarket].
i remember when a bbc comedy program had a sketch "the worst thing you can hear when you turn on the tv" - it was a shit-eating voice, introducing a documentary, saying "welcome to canada; friendly giant of the north!"
sorry, it just popped into my brain and i had to share it - i don't really hate canadians.