deneme bonusu
UK intelligence agency warns of dangers posed by AI chatbots | Insider Feeds %

UK intelligence agency warns of dangers posed by AI chatbots

Date:

Share:

[ad_1]

In brief: As much of the world starts using AI chatbots, concerns about their security implications are being voiced. One of these warnings comes from the UK’s National Cyber Security Centre (NCSC), which has highlighted some potential issues stemming from the likes of ChatGPT.

The NCSC, part of the UK’s GCHQ intelligence agency, published a post on Tuesday delving into the mechanics of generative AIs. It states that while large language models (LLMs) are undoubtedly impressive, they’re not magic, they’re not artificial general intelligence, and they contain some serious flaws.

The NCSC writes that LLMs can get things wrong and ‘hallucinate’ incorrect facts, something we saw with Google’s Bard during the chatbot’s first demo. The agency writes that they can be biased and are often gullible, such as when responding to leading questions; they require huge compute resources and vast data to train from scratch; and they can be coaxed into creating toxic content and are prone to injection attacks.

But the big concern is that sensitive user queries are visible to the provider – OpenAI in the cased of ChatGPT – and may be used to teach future versions of chatbots. Examples of sensitive questions could be somebody asking revealing health or relationship questions. Another hypothetical situation is a CEO asking about the best way to fire an employee.

Amazon and JPMorgan are just two companies that have advised their employees not to use ChatGPT over concerns that sensitive information could be leaked.

Another risk is the potential for stored queries, which could include personally identifiable information, being hacked, leaked, or accidentally made publicly accessible. There’s also a scenario where the LLM operator is taken over by another organization with a less rigorous approach to privacy.

Away from privacy concerns, the NCSC highlights LLMs’ ability to help cybercriminals write malware beyond their capabilities. This is something we heard about in January when security researchers discovered ChatGPT being used on cybercrime forums as both an “educational” tool and malware-creation platform. The chatbot could also be used to answer technical queries about hacking into networks or escalating privileges.

“Individuals and organizations should take great care with the data they choose to submit in prompts. You should ensure that those who want to experiment with LLMs are able to, but in a way that doesn’t place organizational data at risk,” writes the NCSC.

In related news, it was recently revealed that cybercriminals are using AI-generated personas to push malware on YouTube.

Masthead: Emiliano Vittoriosi

[ad_2]

Source link

Subscribe to our magazine

━ more like this

Understanding and Excelling in the HSC Short Syllabus in Bangladesh

Introduction: The Higher Secondary Certificate (HSC) Short Syllabus in Bangladesh has been introduced to overcome academic challenges and ensure effective learning. This comprehensive guide explores...

A Detailed Exploration of SSC Exam Routine 2024 in Bangladesh

Introduction: Embarking on the academic journey, the Secondary School Certificate (SSC) exam holds paramount significance for students in Bangladesh. This comprehensive guide navigates the intricacies...

A Comprehensive Guide to PESP Finance Gov BD

Introduction: In the intricate world of financial management, PESP Finance Gov BD emerges as a key player. This comprehensive guide explores the various aspects of...

Innovative Uses for Coffee Burlap Bags in Your Garden

Demystifying Coffee Burlap Bags Before we dive into their myriad uses, let's acquaint ourselves with coffee burlap bags. Made from robust natural burlap fibers, they're...

Unlocking the Benefits of Online Shopping with Credit Cards: Why OneCard Might Be Your Best Bet?

Indians are increasingly opting for online shopping over in-store purchases, with credit card transactions online outpacing those at physical Point of Sale (PoS) locations...
spot_img