Archive for ChatGPT

ChatGPT Under Investigation By The Canadian Privacy Commissioner… Though It Has Other Issues Elsewhere

Posted in Commentary with tags on April 5, 2023 by itnerd

The new hotness in AI known as ChatGPT is now under investigation by the Canadian Privacy Commissioner because of a complaint alleging that OpenAI, the company behind it, is collecting, using, and disclosing personal information without proper permission:

“AI technology and its effects on privacy is a priority for my Office,” Privacy Commissioner Philippe Dufresne says. “We need to keep up with – and stay ahead of – fast-moving technological advances, and that is one of my key focus areas as Commissioner.”

The investigation into OpenAI, the operator of ChatGPT, was launched in response to a complaint alleging the collection, use and disclosure of personal information without consent.

As this is an active investigation, no additional details are available at this time.

Well, I suppose that it could have been worse. Though it still could get worse, as Italy has banned ChatGPT:

Last week, the Italian Data Protection Watchdog ordered OpenAI to temporarily cease processing Italian users’ data amid a probe into a suspected breach of Europe’s strict privacy regulations.

The regulator, which is also known as Garante, cited a data breach at OpenAI which allowed users to view the titles of conversations other users were having with the chatbot.

There “appears to be no legal basis underpinning the massive collection and processing of personal data in order to ‘train’ the algorithms on which the platform relies,” Garante said in a statement Friday.

Garante also flagged worries over a lack of age restrictions on ChatGPT, and how the chatbot can serve factually incorrect information in its responses.

The part about it serving up factually incorrect info is a problem with ChatGPT. Take, for example, this from Philip N. Cohen, a sociologist and demographer at the University of Maryland, via Mastodon:

Serious question: Can you sue an AI for this?

In any case, this is a huge problem for OpenAI, the maker of ChatGPT, and for those who back it. This, combined with the fact that it appears to hoover up your data, makes this tool problematic. Thus I would not be at all surprised if more countries crack down on ChatGPT in some way, shape, or form. That brings me to another point: what happens if you're a company that has integrated ChatGPT into your products? What then? That's an interesting question, and I think we're going to find out the answer shortly.

ChatGPT Is Good At Many Things…. Including Creating Malware

Posted in Commentary with tags on January 19, 2023 by itnerd

By now you have heard of ChatGPT by OpenAI. It has a lot of abilities, including the ability to learn and come up with great ideas. Apparently, it can create malware too:

ChatGPT took the world by storm being released less than two months ago, it has become prominent and is used everywhere, for a wide variety of tasks – from automation tasks to the recomposition of 18th century classical music. Its impressive features offer fast and intuitive code examples, which are incredibly beneficial for anyone in the software business. However, we find that its ability to write sophisticated malware that holds no malicious code is also quite advanced, and in this post, we will walk through how one might harness ChatGPT power for better or for worse.

And:

ChatGPT could easily be used to create polymorphic malware. This malware’s advanced capabilities can easily evade security products and make mitigation cumbersome with very little effort or investment by the adversary.

That’s not reassuring, to say the least. And Christopher Prewitt, CTO of Inversion6, has this to say:

“ChatGPT is going to be a significant impact to everyone’s lives very quickly and is proving to be of substantial impact. The security community has immediately taken note and it’s alleged that attackers have already been using this technology to create phishing emails and script kiddies taking advantage of this to improve their tactics and tooling. Security researchers have been testing the bounds of this technology to stretch its capabilities from creating malware to analyze and translate code.”

The take-home message is that we’re likely in for a very scary and bumpy ride given the capabilities of ChatGPT. Hopefully there are checks and balances to stop it from becoming the terrifying Skynet from the Terminator movies.

UPDATE: I have additional commentary from Jack Nichelson, CISO of Inversion6:

The use of AI-assisted coding has the potential to revolutionize the way we develop software, but it also poses new risks to cybersecurity. It is important for organizations to understand the potential for malicious use of AI models and take proactive steps to mitigate these risks. This includes investing in security research and development, proper security configuration and regular testing, and implementing monitoring systems to detect and prevent malicious use.

It is important to note that the emergence of AI-assisted coding is a new reality that we must learn to adapt to and be proactive in securing against potential threats. The ability to reduce or even automate the development process using AI is a double-edged sword, and it’s important for organizations to stay ahead of the curve by investing in security research and development.

In this scenario, the researchers were able to bypass content filters by simply asking the question more authoritatively, which suggests that the security of the system was not properly configured. This highlights the importance of proper security configuration and regular testing to ensure that systems are protected against potential threats.

It is also important to note that ChatGPT is not the only AI language model with the potential to be used for malicious purposes, other models like GPT-3 also have the same potential. Therefore, it is important for organizations to stay informed about the latest advancements in AI and its potential risks.

Furthermore, it is important to understand that the possibilities offered by AI are vast, and that we must continue to invest in research and development to stay ahead of potential threats. As the saying goes, “the future is already here, it’s just not evenly distributed”, and it is important for organizations to stay ahead of the curve by investing in security research and development to mitigate the potential risks of AI-assisted coding.