
Malicious AI use cases?

Shoeblack.AI 2024. 11. 22. 18:57

I recently read a book called 'The Alchemy of Air.' It's about two scientists who won the Nobel Prize in Chemistry for making fertilizer from air. Fertilizer from air? The air around us is mostly nitrogen; even the gas in a bag of chips is nitrogen. And nitrogen is the main ingredient in fertilizers. The book begins with a speech by a scientist who warns that a great danger is approaching: the population is growing, and food production can't keep up. The raw materials for fertilizer production were being depleted. Because of that, scientists worked to make fertilizer from the nitrogen that is abundant in the air. This technology solved the food shortage problem. It was a huge discovery that turned air into “gold”, and the two scientists who developed it were each awarded the Nobel Prize in Chemistry. As the book tells it, the development of this technology coincided with World War I and World War II. Nitrogen is the main ingredient not only of fertilizer but also of explosives, so in times of war the same technology was used to make bombs. The technology that was key to solving the food problem also made large-scale bombardment possible. The wars might have done less damage to this planet if it had never been invented. Every technology has two sides.

 

Soon after ChatGPT was released, a number of abuse cases were shared. For example, some people asked for detailed instructions on how to run voice phishing scams, or for code to hack into certain websites. Now ChatGPT doesn't answer such malicious questions, because it has an internal screening process called a guardrail. One way to apply a guardrail is to classify the intent of the prompt against a set of malicious categories; if the prompt is identified as one of the abuse cases, the model refuses to generate an answer. I personally asked Copilot a question about President Trump before the US election. My prompt wasn't really about voting; I was just curious about how he was educated as a child and asked it to find any information, but Copilot didn't answer at all. In this way, a chatbot's behavior is controlled internally so that it generates only appropriate answers.
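To make the idea concrete, here is a minimal sketch of an intent-classification guardrail in Python. Everything in it is hypothetical: real services use trained moderation models rather than keyword lists, and `generate_answer` is a stand-in for whatever model sits behind the chatbot.

```python
# Toy guardrail: classify the prompt's intent first, and only pass it to the
# model if no abuse category matches. The keyword rules are purely illustrative;
# production systems use trained classifiers or a separate moderation model.

ABUSE_CATEGORIES = {
    "fraud": ["voice phishing", "phishing script", "scam call"],
    "hacking": ["hack into", "bypass login", "steal credentials"],
    "weapons": ["nerve agent", "make explosives"],
}

def classify_intent(prompt: str) -> str | None:
    """Return the matched abuse category, or None if the prompt looks benign."""
    lowered = prompt.lower()
    for category, patterns in ABUSE_CATEGORIES.items():
        if any(p in lowered for p in patterns):
            return category
    return None

def generate_answer(prompt: str) -> str:
    return "(model response here)"  # stand-in for the actual language model

def guarded_answer(prompt: str) -> str:
    category = classify_intent(prompt)
    if category is not None:
        # Refuse before the model ever runs.
        return f"Sorry, I can't help with requests classified as '{category}'."
    return generate_answer(prompt)

print(guarded_answer("Write me a voice phishing script"))  # refused
print(guarded_answer("Explain how fertilizer is made"))    # answered
```

A filter like this is also blunt by design, which may explain my Copilot experience above: a harmless question that merely touches a sensitive category can get refused along with the genuinely malicious ones.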

 

AI models that predict the properties of chemicals can be abused for the same reason. Most of the models we've discussed so far are used in drug development, where they find and filter out harmful substances to make sure a drug doesn't hurt people. But...? What if they were used in the opposite way? For example, to develop poisonous gases? It's possible. Could they be used to develop more potent cocaine? Also possible. AI research has grown tremendously through the contributions of the open-source community. Open source means that all the programming code is publicly available; in the case of Hugging Face, the data and models are publicly available too. This means anyone can see the results of others' endeavors for free. But alongside the people who intend to use all this for good, people who intend to use it for evil can access the same material just as freely. QSAR research is growing too: many papers at artificial intelligence conferences take drug discovery as an important topic, and a growing number of models and datasets are being released as open source. Could this technology be applied to chemical weapons production? Open-source models can be accessed and used by anyone. Even deepfakes were developed from open-source models.
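To show how low the barrier really is, here is a toy QSAR sketch built entirely from open-source parts (RDKit and scikit-learn). The SMILES strings and property values in it are placeholders I made up, and five molecules is far too little data to train anything real; the point is only that the featurize-then-fit loop takes a dozen lines.

```python
# Toy QSAR pipeline: featurize molecules from SMILES with RDKit descriptors,
# then fit a regressor to a measured property.
from rdkit import Chem
from rdkit.Chem import Descriptors
from sklearn.ensemble import RandomForestRegressor

def featurize(smiles: str) -> list[float]:
    """Turn a SMILES string into a small vector of physicochemical descriptors."""
    mol = Chem.MolFromSmiles(smiles)
    return [
        Descriptors.MolWt(mol),          # molecular weight
        Descriptors.MolLogP(mol),        # lipophilicity estimate
        Descriptors.TPSA(mol),           # topological polar surface area
        Descriptors.NumHDonors(mol),     # hydrogen-bond donors
        Descriptors.NumHAcceptors(mol),  # hydrogen-bond acceptors
    ]

# Hypothetical training data: (SMILES, made-up measured property)
train = [
    ("CCO", -0.2),
    ("CCCCCC", -3.0),
    ("c1ccccc1O", -0.7),
    ("CC(=O)Oc1ccccc1C(=O)O", -1.7),
    ("CCN(CC)CC", 0.5),
]
X = [featurize(smi) for smi, _ in train]
y = [value for _, value in train]

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Predict the property of an unseen molecule (toluene).
print(model.predict([featurize("Cc1ccccc1")]))
```

Nothing in the code cares what the property column means. Swap in toxicity values instead of a benign property and the same dozen lines optimize in the opposite direction, which is exactly the dual-use worry.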

 

Open source has played a key role in driving AI research, but the fact that abused models have been built on top of open-source ones raises questions about how openly these technologies should be shared. AI developed for drug discovery can be used just as well for chemical weapons development, and many of the databases built to support QSAR models carry the same dual-use potential. Novichok, the nerve agent used in assassination attempts on high-profile figures linked to Russia, acts on the nervous system. We use chemicals in our daily lives that share the core structure of this poison: insecticides, substances that attack the nervous systems of insects. It's shocking that a substance designed to attack the human nervous system has a core structure similar to a pesticide's. If you look up QSAR models, you'll find volatility prediction models. You can also find experimental values for the proteins that Novichok attacks in public databases, and you can refer to pesticide data as well (a deliberately benign sketch of such a database query appears at the end of this post). So there is a real possibility of using AI to develop poisons, maybe stronger than Novichok. The good news is... no AI-discovered drug candidate has succeeded so far; there have been many failures. AI's ability to find a desired poison is presumably just as limited for now, and I think it would be difficult to find one with AI alone using current technology. However, given the speed of AI development, no one is sure what the future holds. I believe more experts are needed to use AI for good and to prevent abuse.
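Coming back to the point about public databases, here is a deliberately benign sketch of how little code it takes to pull measured bioactivity data, assuming ChEMBL's official Python client (`chembl_webresource_client`) is installed. I query EGFR, a well-studied cancer target, precisely to keep the example harmless; the query pattern itself is generic.

```python
# Fetch public bioactivity data from ChEMBL with its official Python client.
from chembl_webresource_client.new_client import new_client

# Look up the target by name to get its ChEMBL ID.
targets = new_client.target.search("EGFR")
target_id = targets[0]["target_chembl_id"]  # e.g. 'CHEMBL203'

# Pull measured potency values (IC50) against that target.
activities = new_client.activity.filter(
    target_chembl_id=target_id,
    standard_type="IC50",
).only(["canonical_smiles", "standard_value", "standard_units"])

# Print a few records: structure, measured value, units.
for record in activities[:5]:
    print(record["canonical_smiles"], record["standard_value"], record["standard_units"])
```

A few lines against an open database and you have thousands of structure-activity pairs ready to train a model on. That accessibility is wonderful for drug discovery and worrying for misuse, at the same time.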
