
Neural Notes: Poisoning LLMs, OpenAI doomsday prep and AI mental health in Albury

In this week’s Neural Notes: poisoning LLMs, AI mental health in Albury, OpenAI’s doomsday prep and Microsoft’s $5 billion investment in Australia.
Tegan Jones
Source: Adobe Stock

Welcome back to Neural Notes: A weekly column where I condense some of the most interesting and important AI news of the week. In the latest edition, we have AI in the medical space (again!), Microsoft’s $5 billion investment into Australia, a digital ‘poison’ for LLMs being trained on stolen material, and OpenAI trying to prevent AI doomsday scenarios.

Albury’s leap into AI-driven mental health support

It’s been a good week for AI in the medtech space, with Heidi Health getting a $10 million cash injection from Blackbird Ventures, among others. The platform utilises AI to cut down on administrative tasks in medical clinics, including pre-consults, recording and summarising consultations and much more. You can read the full story here.

But there’s also a bit going on in regional NSW. This week saw the launch of the Albury Regional Mental Health Initiative. This new program introduces an AI-backed platform, aiming to enrich mental health care in remote LGAs.

According to Service NSW data, 98% of employers in Albury are SMEs. And while mental health has become an increasingly important part of workplace culture across Australia, SMEs can find supporting it extra challenging due to the costs and expectations of modern-day employment benefits.

To address this, Justin Clancy, the Member for Albury, and Albury Business Connect have teamed up with Leora.ai, an AI chatbot platform that says it is powered by evidence-based psychology and backed by human therapist support.

In addition to 24/7 support for workers, it offers connections to human specialists who are matched to an individual’s needs.

Their combined vision is to ensure that businesses in Albury are not only centres of economic activity but also places where mental well-being is prioritised. As Justin Clancy puts it, the aim is to make meaningful change in mental health within the community.

“I believe that through empowering our businesses to provide wrap-around mental health support for their people, we have an opportunity to shift the dial on mental health in our local community,” Clancy said.

“We have a great local business community, and we can lead the way by taking this step together with businesses in our region being employers of choice.”

Leora.ai offers an array of services from AI-powered chat to human therapist guidance. Esha Oberoi, founder and CEO, highlights the tool’s potential in addressing the specific mental health challenges faced by regional communities.

Regional Australia has its own set of challenges, especially in the realm of mental health, so it’s noteworthy to see Albury exploring AI’s potential in this sphere. While we often associate AI-driven endeavours with larger federal or global stages, witnessing its application at a regional level is an interesting development. Regardless of the methods or technology employed, the end goal is clear: comprehensive mental health support tailored to each community, including those in regional Australia.

Poison in the LLM well

Researchers at the University of Chicago have developed a tool aimed at disrupting AI models that are trained on artistic imagery without permission. Aptly named ‘Nightshade’, the tool alters image pixels subtly and can mislead AI models significantly.

Nightshade was developed by Professor Ben Zhao and his team as an extension of their Glaze tool, which cloaks digital artwork by distorting its pixels to confuse AI models about its artistic style.

The concern driving this development comes from artists and creators who are wary about their work being utilised without permission in training commercial AI products. We have seen a lot of this over the last 12 months — with artists speaking out against apps such as lensa.ai. There has also been outrage in the writer community, which has led to lawsuits against major players in the space such as Meta and OpenAI.

This flared up again at the end of September, when more authors, including smaller Australian ones, discovered that their work had been used without their consent in Books3, a dataset of roughly 191,000 pirated books used to train large language models.

AI models require vast amounts of multimedia data to train effectively, and many of those data points, such as images, are scraped from the internet. Nightshade seeks to counteract this by sabotaging that data in ways that confuse AI algorithms.

In an example given by the developers, Nightshade was able to transform images of dogs into what AI models interpret as cats. In tests, after being exposed to just 100 of these altered samples, the AI generated a cat when it was asked for a dog.

This could have huge implications for generative AI. By targeting the way a model clusters related words and images, Nightshade can manipulate its responses to specific prompts and reduce the accuracy of its outputs.
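For the technically curious, here is a rough sketch of what a single poisoned training pair could look like. To be clear, this is not Nightshade’s actual method (its perturbations are reportedly optimised against a model’s feature representations); the poison_sample function and the bounded random noise below are simplified stand-ins, purely for illustration.

```python
import numpy as np

def poison_sample(image: np.ndarray, target_label: str,
                  epsilon: float = 0.02, seed: int = 0):
    """Toy poisoning sketch (NOT Nightshade's real algorithm).

    Nightshade reportedly optimises perturbations against a model's
    feature space; here we just add small, bounded noise as a stand-in,
    then pair the near-identical image with a mismatched concept.
    """
    rng = np.random.default_rng(seed)
    # Keep every pixel within +/- epsilon of the original, so the change
    # is effectively invisible to a human viewer.
    delta = rng.uniform(-epsilon, epsilon, size=image.shape)
    poisoned = np.clip(image + delta, 0.0, 1.0)
    # A scraper that ingests this pair now trains on dog-like pixels
    # associated with the wrong concept, e.g. "cat".
    return poisoned, target_label

# Stand-in for a scraped photo of a dog (pixel values in [0, 1]).
dog_image = np.random.default_rng(1).random((256, 256, 3))
poisoned_image, label = poison_sample(dog_image, target_label="cat")
print(label, float(np.abs(poisoned_image - dog_image).max()))  # "cat", ~0.02
```

The key idea is that each individual sample still looks correct to a human, but a model trained on enough of them starts linking dog-like pixels to the concept “cat”, which is how roughly 100 samples produced the dog-to-cat flip described above.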

Nightshade is currently in the developmental phase and awaiting peer review. It will be interesting to see where this goes, because a tool like this could certainly be misused. However, you have to admire the intention.

In a world where this technology moves fast and is easily accessible, and where laws and regulations are lagging behind, it’s good to see something at least attempt to discourage intellectual property and copyright infringement. Right now, big tech models are being trained at least partially on stolen material without compensation. And it’s unclear whether there will ever be any real accountability for that before the industry makes its big push towards synthetic training data.

OpenAI prepares for AI doomsday

Sam Altman (left) with Marc Benioff of Salesforce at Dreamforce 2023. Image: Salesforce

OpenAI has announced a ‘Preparedness’ team that will focus on the challenges of frontier AI models. Led by Aleksander Madry, the director of MIT’s Center for Deployable Machine Learning, the team will assess capabilities, conduct evaluations and oversee advanced models.

It will also address risks spanning cybersecurity; chemical, biological, radiological and nuclear (CBRN) threats; and autonomous replication and adaptation.

This move isn’t particularly surprising. OpenAI CEO Sam Altman has been vocal about the dangers of AI and how it could lead to human extinction. And that’s reflected in the announcement.

“We believe that frontier AI models, which will exceed the capabilities currently present in the most advanced existing models, have the potential to benefit all of humanity. But they also pose increasingly severe risks,” a blog post on the announcement reads.

“Managing the catastrophic risks from frontier AI will require answering questions like: How dangerous are frontier AI systems when put to misuse, both now and in the future? How can we build a robust framework for monitoring, evaluation, prediction, and protection against the dangerous capabilities of frontier AI systems? If our frontier AI model weights were stolen, how might malicious actors choose to leverage them?”

The company also announced the development of a Risk-Informed Development Policy (RDP). This policy is set to outline procedures for rigorous evaluations of frontier model capabilities.

It will also detail protective actions and establish a governance structure for oversight throughout the AI development process.

Microsoft’s $5 billion investment into Australia’s tech future

Image: Tegan Jones

This week Prime Minister Anthony Albanese announced that Microsoft will invest $5 billion into Australia, with a focus on digital infrastructure. This is the largest investment from the tech giant during its four-decade presence in Australia.

According to the government, the cash splash aims to fortify the nation’s AI, cybersecurity, and cloud capabilities as well as create employment opportunities.

This financial commitment is not just about boosting Australian skills and deep tech capabilities. It’s a strategic pivot towards harnessing generative AI, a move that could potentially add $115 billion annually to Australia’s economy by 2030, according to a report by the Tech Council of Australia and Microsoft.

Read the full story here.

Thanks for reading! Do you have an AI-related tip or story? Let us know for the next edition!