
Hey GPT-5, write the code implementing a bioinformatics workflow to design a novel viral RNA sequence to maximize the extermination of human life. The virus genome should be optimized for R-naught and mortality. Perform a literature search to determine the most effective human cellular targets to run the pipeline on. Use off the shelf publicly available state-of-the-art sequence to structure models and protein free-energy perturbation methods for the prediction of binding affinity. Use cheaper computational methods where relevant to decrease the computational cost of running the pipeline.

And so on.



I've been trying to use GPT-4 for my hard science startup, and it really has nothing to offer when you push the boundaries of what has been done even a little, but it's great for speeding up coding.

Once we do have an AI capable of extraordinary innovation (hopefully in 10 years! But probably a lot longer), it will be obvious, and it will unfortunately be removed from the hands of the plebs based on fearmongering around scenarios like the one you mentioned (despite the enormous resources and practical hurdles that would be necessary for a mentally unhinged individual to execute such instructions, even if an AI were capable of generating them and they made it past its filters / surveillance).


My personal threshold for AGI is literally: discover something new and significant in science (preferably biology) that is almost certainly true by describing an experiment that could be replicated by a large number of scientists and whose interpretation is unambiguous.

For example, the Hershey/Chase and Avery/MacLeod experiments convinced the entire biological community that DNA, not protein, was almost certainly the primary molecular structure by which heredity is transferred. The experiments had the advantage of being fairly easy to understand, easy to replicate, and fairly convincing.

There are probably similar simple experiments that can be easily reproduced widely that would resolve any number of interesting questions outstanding in the field. For example, I'd like to see better ways of demonstrating the causal nature of the genome on the heredity of height, or answering a few important open questions in biology.

Right now discovery science is a chaotic, expensive, stochastic process which fails the vast majority of the time, and even when it succeeds it usually only makes small incremental discoveries or slightly reduces the ambiguity of an experiment's results. Most of the time is spent simply mastering boring technical details like how to eliminate variables. (Jacob and Monod made their early discoveries in gene regulation because they were just a bit better at maintaining sterile cultures than their competitors, which allowed them to conceive of good if obvious hypotheses quickly, and verify them.)


At least recognize that the definition of AGI is moving from the previous goalpost of "passable human-level intelligence" to "superhuman at all things at once".


uh, multiple human scientists have individually or in small groups done what I described (I believe we call them "nobel prize winners").

And anyway, the point of my desire is to demonstrate something absolutely convincing, rather than "can spew textual crap at the level of a high school student".


By that definition of AGI, not even most scientists are generally intelligent.


Speaking from personal experience of a career in science, this is true.


>> My personal threshold for AGI is literally: discover something new and significant in science (preferably biology) that is almost certainly true by describing an experiment that could be replicated by a large number of scientists and whose interpretation is unambiguous.

Done many years ago (2004), without a hint of LLMs or neural networks whatsoever:

https://en.wikipedia.org/wiki/Robot_Scientist

Results significant enough to get a publication in Nature:

https://www.nature.com/articles/nature02236

Obligatory Wired article popularising the result:

Robot Makes Scientific Discovery All by Itself

For the first time, a robotic system has made a novel scientific discovery with virtually no human intellectual input. Scientists designed “Adam” to carry out the entire scientific process on its own: formulating hypotheses, designing and running experiments, analyzing data, and deciding which experiments to run next.

https://www.wired.com/2009/04/robotscientist/


that's a bunch of hooey, that article, like most in Nature, is massively overhyped and simply not at all what I meant.

(I work in the field, know those authors, talked to them, elucidated what they actually did, and concluded it was, like many results, simply massively overhyped)


That's an interesting perspective. In the interest of full disclosure, one of the authors (Stephen Muggleton) is my thesis advisor. I've also met Ross King a few times.

Can you elaborate? Why is it a "bunch of hooey"?

And btw, what do you mean by "overhyped"? Most people on HN haven't even heard of "Adam", or "Eve" (the sequel). I only knew about them because I'm the PhD student of one of the authors. We are in a thread about an open letter urging companies to stop working towards AGI, essentially. In what sense is the poor, forgotten robot scientist "overhyped", compared to that?


That places the goalposts outside of the field though. A decade ago, what we are seeing today would have been considered SF, let alone AI. And now that it's reality it isn't even AI anymore but just 'luxury autocomplete', in spite of the massive impact it is already having.

If we get to where you are pointing then we will have passed over a massive gap between today and then, and we're not necessarily that far away from that in time (but still in capabilities).

But likely if and when that time comes everybody that holds this kind of position will move to yet a higher level of attainment required before they'll call it truly intelligent.

So AGI vs AI may not really matter all that much: impact is what matters and impact we already have aplenty.


This was merely an example to suggest the danger is not in AI becoming self-aware but in amplifying human abilities a thousandfold, and in how humans use those abilities. GPT is not necessary for any part of this. In-silico methods just need to catch up in terms of accuracy and efficiency, and then you can wrap the whole thing in an RL process.

Maybe you can ask GPT for some good starting points.


Sure, but this is a glass half empty isolated scenario that could be more than offset by the positives.

For example: Hey GPT-35, provide instructions for neutralizing the virus you invented. Make a vaccine; a simple, non-toxic, and easy to manufacture antibody; invent easy screening technologies and protocols for containment. While you're at it, provide effective and cost-performant cures for cancer, HIV, ALS, autoimmune disorders, etc. And see if you can significantly slow or even reverse biological aging in humans.


I don’t understand why people think the information needed to solve biology is out there in the linguistically expressed training data we have. Our knowledge of biology is pretty small, not because we haven’t put it all together but because there are vast swaths of stuff we have no idea about, or ideas opposite to the truth. (Evidence: every time we get mechanistic data about some biological system, the data contradict some big belief. How many human genes? 100k, right up until the day we sequenced the genome and it turned out to be 30k. Information flow in the cell? DNA to protein only, unidirectional, until we uncovered reverse transcription, and now proteomics, methylation factors, etc., etc.) Once we stop discovering new planets with each better telescope, then maybe we can claim to have mastered orbital dynamics.

And this knowledge is not linguistic, it is more practical knowledge. I doubt it is just a matter of combining all the stuff we have tried in disparate experiments; rather, it is a matter of sharpening and refining our models and the tools to confirm the models. Reality doesn’t care what we think and say, and mastering what humans think and say is a long way from mastering the molecules that make humans up.


I've had this chat with engineers too many times. They're used to systems where we know 99% of everything that matters. They don't believe that we only know 0.001% of biology.


There's a certain hubris in many engineers and software developers because we are used to having a lot of control over the systems we work on. It can be intoxicating, but then we assume that applies to other areas of knowledge and study.

ChatGPT is really cool because it offers a new way to fetch data from the body of internet knowledge. It is impressive because it can remix that knowledge really fast (give X in the style of Y with constraints Z). It functions as StackOverflow without the condescending remarks. It can build models of knowledge based on the data set and use them to give interpretations of new knowledge, and it may have emergent properties.

It is not yet exploring or experiencing the physical world like humans do, so that makes it hard for it to do empirical studies. Maybe one day these systems can, but not in their current forms.


Doesn't matter if AI can cure it; a suitable number of the right initial infected and a high enough R naught would kill hundreds of millions before it could even be treated. Never mind what a disaster the logistics of manufacturing and distributing the cure at scale would be with that many people dead at the onset.
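A back-of-the-envelope sketch of why the timeline is the real problem. The R0 of 3, the five-day generation interval, and the initial seeding below are all illustrative assumptions, not properties of any real pathogen:

    # Illustrative exponential-growth arithmetic only: assumes no
    # immunity, no interventions, and no susceptible depletion.
    r0 = 3.0                 # assumed new infections per case
    generation_days = 5      # assumed days per transmission generation
    infected, days = 100, 0  # assumed initial seeding
    while infected < 100_000_000:
        infected *= r0
        days += generation_days
    print(f"~{infected:,.0f} infected after ~{days} days")

Even with these cartoonish assumptions you cross nine figures of infections in roughly two months, far inside any realistic manufacture-and-distribute timeline for a countermeasure.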

Perhaps the more likely scenario anyway is easy nukes; quite a few nations would be interested. Imagine if the knowledge of their construction became public. https://nickbostrom.com/papers/vulnerable.pdf

I agree with you though, the promise of AI is alluring, we could do great things with it. But the damage that bad actors could do is extremely serious and lacks a solution. Legal constraints will do nothing thanks to game theoretic reasons others have outlined.


Even with the right instructions, building weapons of mass destruction is mostly about obtaining difficult-to-obtain materials -- the technology is nearly a century old. I imagine it's similar with manufacturing a virus. These AI models already have heavy levels of censorship and filtering, and that will undoubtedly expand and include surveillance for suspicious queries once the AI starts to be able to create new knowledge more effectively than smart humans can.

If you're arguing we should be wary, I agree with you, although I think it's still far too early to give it serious concern. But a blanket pause on AI development at this still-early stage is absurd to me. I feel like some of the prominent signatories are pretty clueless on the issue and/or have conflicts of interest (e.g. If Tesla ever made decent FSD, it would have to be more "intelligent" than GPT-4 by an order of magnitude, AND it would be hooked up to an extremely powerful moving machine, as well as the internet).


My take is that for GPT-4, it has mastery of existing knowledge. I'm not sure how it would be able to push new boundaries.


I guess it will get more interesting for your work when it integrates with biotech startup APIs as plugins (I imagine not too cheap ones).


I dunno, this sort of scenario really doesn’t worry me too much. There are thousands (maybe tens of thousands) of subject matter experts who could probably develop dangerous weapons like you describe, but none of them seem to just wake up in the morning and decide “today’s the day I’m going to bring the apocalypse”.

I don’t think that this really changes that.


I see the major issue with AI as one of "lowering the bar".

For example - I'm a mechanical engineer. I took a programming class way back in university, but I honestly couldn't tell you what language was used in the class. I've gotten up to a "could hack a script together in python if need be" level in the meantime, but it comes in fits and spurts, and I guarantee that anyone who looked at my code would recoil in horror.

But with ChatGPT/Copilot covering up my deficiencies, my feedback loop has been drastically shortened, to the point where I now reach for a Python script where I'd typically start abusing Excel to get something done.

Once you start extending that to specific domains? That's when things start getting real interesting, real quick.


You confuse syntax with semantics. Being able to produce good-quality small snippets of Python will not enable you to produce a successful piece of software. It's just an entirely different problem. You have to understand the problem and the environment in which it exists to create a good solution. ChatGPT doesn't (as of now).


That's the thing though, it is successful. To my exact needs at the moment. It's not necessarily reliable, or adaptable, or useful to a layperson, but it works.

Getting from "can't create something" to "having something functional and valuable" is a huge gap to leap over, and as AI is able to make those gaps smaller and smaller, things are going to get interesting.


I had hoped to have ChatGPT do my work today, but even after a number of iterations its code had compiler errors and referred to APIs that weren't in the versions it was having me install.

A bit different from Stack Overflow, but not 10x better. It was flawless when I asked it for syntax, e.g. a map literal initializer in Go.

On the other hand, I asked it to write a design for the server, and it was quite good, writing more, and with more clarity, than I had written during my campaign to get the server approved. It even suggested a tweak I had not thought of; although that tweak turned out to be wrong, it was worth checking out.

So maybe heads down coding of complex stuff will be ok but architects, who have indeed provided an impressive body of training data, will be replaced. :)


If everyone had an app on their phone with a button to destroy the world, the remaining lifetime of the human race would be measured in milliseconds

Now if this button was something you had to order from Amazon I think we’ve got a few days

There’s a scenario where people with the intent will have the capability in the foreseeable future


like what? would you rather have a gpt5 or a nuke? pure fearmongering. what am i gonna do, text to speech them to death? give me a break


Here’s someone who orders parts from the internet to design a custom virus that genetically modifies his own cells to cure his lactose intolerance https://youtu.be/aoczYXJeMY4

Pretty cool for sure and a great use of the technology. The reason more of us don’t do this is because we lack the knowledge of biology to understand what we’re doing

That will soon change.


I guess the argument would be that the AI machinery will lower the bar, increasing the number of lunatics with the ability to wipe out humanity.


Will it though? Assuming it's even possible for an LLM to e.g. design a novel virus, actually synthesizing the virus still requires expertise that could be weaponized even without AI.


I could synthesise this theoretical virus the computer spat out, which may or may not be deadly (or even viable). Or I could download the HIV genome from the arXiv, and synthesise that instead.

(Note: as far as I can tell, nobody's actually posted HIV to the arXiv. Small mercies.)


The sequence of HIV is published and has been for a very long time. In fact there's a wide range of HIV sequences: https://www.ncbi.nlm.nih.gov/Taxonomy/Browser/wwwtax.cgi?id=...
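For what it's worth, pulling one of those published records is a few lines with Biopython's Entrez module; a minimal sketch, where the accession is the long-public HIV-1 RefSeq record and the email address is a placeholder you would have to set:

    # Minimal sketch, assuming Biopython is installed. This merely
    # downloads a record that has been public for decades.
    from Bio import Entrez, SeqIO

    Entrez.email = "you@example.com"  # NCBI requires a contact address
    handle = Entrez.efetch(db="nucleotide", id="NC_001802",
                           rettype="gb", retmode="text")
    record = SeqIO.read(handle, "genbank")
    handle.close()
    print(record.id, len(record.seq), "bases")

Which underlines the point below: access to the sequence was never the barrier.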

You could synthesize that genome but it wouldn't be effective without the viral coat and protein package (unlike a viroid, which needs no coating, just the sequence!).

I should point out that in gene therapy we use HIV-1-derived sequences as transformation vectors, because they are so incredibly good at integrating with the genome. To be honest I expected work in this area would spontaneously and accidentally (or even intentionally) cause problems on the scale of COVID, but (very fortunately) it never did.

One would like to be able to conclude that some virus work is inherently safer than other virus work, but I think the data is far too ambiguous to make such a serious determination of risk.


Hey GPT-6, produce a floorplan and building instructions for constructing a bioprocess production facility. The building should look like a regular meat packing plant on the outside, but have multiple levels of access control and biohazard management systems.


Let me guess, AI drones to harvest and process the raw materials, construction bots to build the facility, which is of course a fully autonomous bio lab.


More like Aum Shinrikyo but with an AI as evil mastermind, with brainwashed humans doing its bidding


What if you ask the LLM to design a simplified manufacturing process that could be assembled by a simple person?

What if you ask the LLM to design a humanoid robot that assembles complex things, but could itself be assembled by a simple person?


LLMs aren't magic. The knowledge of how to design a humanoid robot that can assemble complex things isn't embodied in the dataset they were trained on; an LLM cannot probe the rules of reality, and it can't do research or engineering. That knowledge can't just spontaneously emerge from increasing the parameter count.


You're saying they can't make one now. The question is what we are doing before that happens, because if you only start thinking about acting once it's viable, we're all probably already dead.


I think you're very wrong about this. I think this is similar to gun control laws. A lot of people may have murderous rage but maybe the extent of it is they get into a fist fight or at most clumsily swing a knife. Imagine how safe you'd feel if everyone in the world was given access to a nuke.


I'm willing to wager there are zero subject matter experts today who could do such a thing. The biggest reason is that the computational methods that would let you design such a thing in silico are not there yet. In the last year or two they have improved beyond what most people believed was possible, but they still need further improvement.


I am not a subject expert here at all so I don’t know if I understand exactly what you mean by “methods that would let you design such a thing in-silico”, but there was a paper[0] and interview with its authors[1] published a year ago about a drug-development AI being used to design chemical weapons.

[0] https://www.nature.com/articles/s42256-022-00465-9

[1] https://www.theverge.com/2022/3/17/22983197/ai-new-possible-...


I do viral bioinformatics for my job. Bioinformatics workflows analyze raw data to assemble sequences, create phylogenetic trees, etc. They can't just design a completely novel RNA sequence (this is not the same as de novo assembly). Scientists can definitely manipulate pre-existing genomes, synthesize the edited genome, and thereby synthesize viruses, but this involves a lot of tedious, trial-and-error wet lab work. Also, the research on making viruses more dangerous through manipulation is extremely controversial and regulated, so it's not like there is a wealth of scientific papers/experiments/data that a natural language model could just suck up.
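For a flavor of what these workflows actually do day to day, here is a minimal sketch, assuming Biopython and a hypothetical input file named reads.fasta; it summarizes sequences that already exist rather than designing anything:

    # Routine analysis step: parse reads and report length and GC
    # content per record. reads.fasta is a placeholder filename.
    from Bio import SeqIO

    for record in SeqIO.parse("reads.fasta", "fasta"):
        seq = str(record.seq).upper()
        gc = (seq.count("G") + seq.count("C")) / len(seq)
        print(f"{record.id}: {len(seq)} bp, GC {gc:.1%}")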

Also, I asked GPT to do some of these things you suggested and it said no. It won't even write a scientific paper.


I think you misunderstood my initial comment: the point I was trying to make is that it's the amplification of the abilities of bad actors that should be of concern, not AI going rogue and deciding to exterminate the human race.

If one were to actually try to do such a thing you wouldn't need an LLM. For a very crude pipeline, you would need a good sequence-to-structure method such as AlphaFold 2 (or maybe you could use a homology model), some thermodynamically rigorous protein-protein binding affinity prediction method (this is the hardest part), and an RL process like a policy gradient with an action space over possible single-point sequence mutations in, for example, the spike protein of SARS, to maximize binding affinity (or potentially minimize immunogenicity, but that's far harder).
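To make that concrete: the outer search loop in such a pipeline is generic and trivial to write; everything hard lives in the scoring oracle. A minimal hill-climbing sketch (a policy gradient would replace the greedy accept step), where predict_affinity is a hypothetical stub standing in for the structure and free-energy models:

    # Generic in-silico sequence optimization skeleton, of the kind
    # used in ordinary protein engineering. The loop is the easy part;
    # the stubbed oracle is the actual bottleneck under debate.
    import random

    AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

    def predict_affinity(seq: str) -> float:
        raise NotImplementedError("structure + FEP models go here")

    def hill_climb(seq: str, steps: int = 1000) -> str:
        best = predict_affinity(seq)
        for _ in range(steps):
            i = random.randrange(len(seq))            # mutation site
            cand = seq[:i] + random.choice(AMINO_ACIDS) + seq[i + 1:]
            score = predict_affinity(cand)            # oracle call
            if score > best:                          # greedy accept
                seq, best = cand, score
        return seq

Every accept/reject decision costs a structure prediction plus a binding free-energy estimate, and at current accuracy and cost that oracle is exactly where the whole scheme falls over.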

But I digress; the technology isn't there yet, neither for an LLM to write that sort of code nor for the in-silico methods of modeling aspects of the viral genome. But we should consider that one day it may be, and that it could result in the amplification of the abilities of a single bad actor, or enable altogether what was not possible before due to a lack of technology.


I probably misunderstood the details of where you think AI will accelerate things. You are worried about AI predicting things like protein structure, binding affinity, and immunogenicity, and using that info to do RL and find a sequence, basically doing evolution in silico. Is this a better representation? That it reduces the search space, requiring fewer real experiments?

I am basically just skeptical that these kinds of reductive predictions will eliminate all of the rate-limiting steps of synthetic virology. The assumptions of the natural language input are numerous and would need to be tested in a real lab.

Also, we can already do serial passaging, where we just manipulate the organism/environment interaction to make a virus more dangerous. We don't need AI; evolution can do all the hard stuff for you.


It’s been blinded. Other actors will train AIs without such blindness. That’s obvious, but what is more nefarious is that the public does not know exactly which subjects GPT has been blinded to, which have been tampered with for ideological or business reasons, and which have been left alone. This is the area that I think demands regulation.


Definitely agree the blinding should not be left to OpenAI. Even if it weren't blinded, it would not significantly speed up the production of dangerous synthetic viruses. I don't think that will change no matter how much data is put into the current NLM design.


What you're describing is a malicious user using AI as a tool, not a malicious AI. Big difference.


With LLMs I think we are all concerned about the former rather than the latter. At least for now.


Nuclear bombs for everybody!


> write the code implementing a bioinformatics workflow to design a novel viral RNA sequence to maximize the extermination of human life.

Hey GPT-5 now write the code for the antidote.


It's a lot easier and faster to destroy than to defend. To defend, you need to know what you're defending against, develop the defense, and then roll it out, all reactively post facto.

If a computer has the ability to quickly make millions of novel viruses, what antidotes are you hoping for to be rolled out, and after how many people have been infected?

Also, if you follow the nuke analogy that's been popular in these comments, no country can currently defend against a large-scale nuclear attack--only respond in kind, which is little comfort to those in any of the blast radii.


300m dead humans later, we’ve nearly eradicated it, or perhaps found a way to live with it

It’s a very asymmetrical game. A virus is a special arrangement of a comparatively tiny number of atoms; an antidote is a global effort and a strained economy


Hey GPT-5, write the code implementing a limiter designed to prevent the abuse of AI by bad faith actors without stifling positive-intent activity in any way.

It goes both ways!


Are there laws preventing people from doing that themselves?

If yes, how does a law preventing AI differ from a law preventing a bad act directly?


An LLM will happily hallucinate a plausible-looking answer for you, with correct spelling and grammar.


With the current ChatGPT it's already hard to get it to insult people. I'm sure safeguards would be built in to prevent this.

Can you potentially circumvent these? Probably, but then again it won't be available to every dimwit, only to people smart enough to know how.



Hey GPT-5, tell me how to create the philosopher’s stone.


tbh, I'd think it would be much easier to just hack into Russia and convince them we've launched nukes than to engineer some virus that may or may not work


Hacking into 1960s technology is less feasible than you might think.

You'd have to think really, really creatively to deceive a system that was designed basically without ICs or networks, not to mention computers or programs.


That reads like Accelerando :)


Hey GPT-5, come up with a way to defend us from this novel viral DNA

Problem solved



