AI is popping up everywhere. And people have to adjust not only to the work-related and cultural implications of AI, but to the security ones as well. Take the impact on financial advisors. There’s a suggestion that AI could usher in a revolution in this industry:
According to a recent poll by Morgan Stanley Wealth Management, 72% of investors think artificial intelligence will change the way they and traders operate, and 74% of respondents think it will improve the quality of client service provided by financial advisers. 82% of respondents stated AI would never replace human counsel, and 88% agreed that human-to-human contact with a financial adviser is crucial. However, 63% of respondents would be interested in working with an advisory business that uses AI.
The poll discovered that younger investors are more excited about AI’s possibilities. Eighty-seven percent of respondents in the 35 to 44 age range said AI was a game-changer, 89% believed it would help advisors provide better client service, and 85% indicated they would be interested in working with an advisor who uses it.
Younger investors, however, share the general sample’s conviction that AI will not replace the advisor-client relationship. A sample of 924 American investors participated in the online poll, which Dynata fielded in April.
But others urge caution:
“While AI is clearly groundbreaking, and we are just scratching the surface of its potential impact within financial services, this data aligns with an insight we’ve known for some time: The clients who are most engaged with their financial advisors are also the most satisfied,” Jeff McMillan, MSWM’s head of analytics, data and innovation, said in a statement.
“Within this context, AI should be viewed not as a replacement of human guidance, but as a powerful tool to help turbocharge a financial advisor’s practice management and client interaction capabilities.”
There’s another thing to consider: what risks come with actively using AI in this industry? Ani Chaudhuri, CEO of Dasera, speaks to two big risks: privacy and data security:
As we delve into the age of AI-driven financial advising, it’s crucial to understand both its immense potential and its inherent risks. The idea of AI-assisted financial advice is groundbreaking, and it could significantly streamline processes, democratize financial planning, and deliver personalized strategies at scale. However, its implementation is not without its challenges.
From a high-level perspective, AI tools like Bard leverage sophisticated algorithms and machine learning techniques to analyze vast amounts of data. They generate insights, make predictions, and offer advice based on the patterns and trends they discern.
However, such powerful tools are not without their shortcomings. One of the most pressing concerns lies in data security and privacy. Given the sensitive nature of financial data, any compromise could lead to severe consequences. AI systems are only as secure as the measures put in place to protect them, and they are not immune to breaches or misuse.
The use of AI in financial advising also raises privacy concerns. As these AI tools process vast amounts of personal data to provide personalized advice, robust measures must be in place to ensure the privacy of this data. Transparency about how the data is used, stored, and protected should be a priority.
Moreover, as AI becomes more integrated with financial advising, firms must ensure robust data governance measures are in place. This includes maintaining detailed logs of AI actions and decisions for auditing purposes, having clear visibility over who has access to the AI and the data it processes, and having measures to swiftly detect and respond to any anomalies or potential security incidents.
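The controls described above, detailed logs of AI actions, visibility into who accessed what, and anomaly detection, can be sketched in a few lines. This is a minimal, hypothetical illustration; the `AuditLog` class, its fields, and the read-volume threshold are invented for the example and are not any firm's actual system:

```python
from collections import Counter
from datetime import datetime, timezone

class AuditLog:
    """Hypothetical in-memory audit trail for AI tool data access."""

    def __init__(self):
        self.entries = []

    def record(self, user, action, dataset):
        # Log who did what to which dataset, with a UTC timestamp,
        # so every AI action is traceable for later audits.
        self.entries.append({
            "user": user,
            "action": action,
            "dataset": dataset,
            "at": datetime.now(timezone.utc).isoformat(),
        })

    def flag_anomalies(self, max_reads_per_user=100):
        # A deliberately simple anomaly rule: flag any user whose
        # read volume exceeds a fixed threshold. Real systems would
        # use baselines per role, time window, and dataset.
        reads = Counter(e["user"] for e in self.entries if e["action"] == "read")
        return [u for u, n in reads.items() if n > max_reads_per_user]
```

A real deployment would persist these entries to tamper-evident storage and feed them into a monitoring pipeline, but even this skeleton shows how logging and detection fit together.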
The rise of AI in financial advising underscores the increasing importance of cybersecurity in the financial sector. While AI can revolutionize financial advising, firms must navigate this path carefully, ensuring they balance innovation and security.
Privacy risks related to AI appear in another place: Hollywood. At the moment there is a writers strike, and one of the issues on the table is AI:
The Writers Guild of America (WGA), a labour union representing writers who primarily work in film and television, began the work strike this month after reaching an impasse in negotiations with the Alliance of Motion Picture and Television Producers that represents the US entertainment industry. Part of the disagreement revolves around a WGA proposal to ban the industry from using AIs such as ChatGPT to generate story ideas or scripts for films and shows – the union wants to ensure that such technologies do not undermine writers’ compensation and writing credits.
“The fear is that AI could be used to produce first drafts of shows, and then a small number of writers would work off of those scripts,” says Virginia Doellgast at Cornell University in New York.
Now that sounds like something out of a Hollywood script. But Ani Chaudhuri, CEO of Dasera, doesn’t think so. Instead, he has other concerns:
The emergence of AI in Hollywood signals a paradigm shift in content creation. This technology holds the potential to unlock new creative avenues, but we should recognize the distinct challenges it introduces.
AI may streamline the production process but cannot replace the human touch in storytelling. Content ‘perfection’ cannot be defined by algorithms alone. Artistry, after all, thrives on spontaneity, innovation, and human emotion – elements AI cannot replicate in its entirety. While AI can augment the creative process, the fear of artists being entirely replaced is unwarranted. The challenge lies in striking the right balance where AI complements human creativity rather than supplants it. This is how our marketing team is working with various AI tools today.
The incorporation of AI also introduces fresh data security and privacy risks. As AI models consume vast amounts of data for training and development, this data could be misused or mishandled, potentially leading to breaches. There’s also the risk of ‘deepfakes,’ manipulated videos created using AI, which could tarnish reputations or spread disinformation.
Studios and streaming platforms must take these risks seriously. This necessitates robust data security and governance frameworks to protect sensitive information and uphold the privacy of creators and audiences. Data access should be strictly regulated on a need-to-know basis, with clear visibility over who is accessing what data and why. Regular audits should be conducted to detect any anomalies or potential data misuse.
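The need-to-know access model in the paragraph above can be illustrated with a short sketch. The role names, asset names, and the `NEED_TO_KNOW` table are all invented for the example; a studio would back this with a real identity and entitlement system:

```python
# Hypothetical need-to-know table mapping sensitive assets to the
# roles allowed to touch them. Everything not listed is denied.
NEED_TO_KNOW = {
    "script_drafts": {"writer", "showrunner"},
    "audience_analytics": {"marketing", "data_science"},
    "talent_contracts": {"legal"},
}

def may_access(role: str, asset: str) -> bool:
    # Default-deny: access is granted only when the role appears on
    # the asset's need-to-know list; unknown assets are refused.
    return role in NEED_TO_KNOW.get(asset, set())
```

Pairing a default-deny check like this with the audit trail Chaudhuri describes gives both the "who may" and the "who did" halves of data governance.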
Moreover, cybersecurity measures must extend to AI tools to ensure they’re not manipulated for malicious intent. Clear guidelines should be established for the ethical use of AI, and these should be transparent to all stakeholders involved, including creators and audiences.
The marriage of Hollywood and AI is exciting, but it must be navigated thoughtfully to protect the creative process, uphold security, and maintain trust.
The concerns that Mr. Chaudhuri has about Hollywood sound like the ones he has about the financial industry. That suggests to me that people may not be focused enough on privacy and security when it comes to AI. So instead of thinking about jobs being lost, or Skynet from the Terminator movies destroying all humanity, maybe the conversation needs to shift to more practical matters, seeing as privacy and security are problems today.
MU Healthcare Suffers A Data Breach Via An Insider
Posted in Commentary with tags Hacked on May 19, 2023 by itnerd
MU Healthcare has posted a data breach notification that got my attention today:
Upon learning on March 20, 2023, that a workforce member may have been accessing health information in the electronic medical record (EMR) inappropriately, we immediately began an investigation and suspended the workforce member’s access to the EMR.
The subsequent investigation revealed the workforce member used the electronic medical record (EMR) to access 736 medical records between July 2021 and March 2023 potentially without a verified Health Insurance Portability and Accountability Act (HIPAA) purpose.
The accesses may have contained patient information including name, date of birth, medical record number, and limited treatment and/or clinical information, such as diagnostic and/or procedure information.
To date, there is no indication that the information was misused or re-disclosed. However, MU Health Care began mailing notification letters to patients whose information may have been inappropriately accessed, alerting them to the incident and advising them to be vigilant in the event of any suspicious activity involving their accounts.
Ani Chaudhuri, CEO, Dasera had this comment:
The news about the data breach at MU Health Care underscores a widespread challenge within the industry: keeping sensitive patient data secure. While it’s distressing to see another breach, especially one involving an insider threat, it’s important to view this situation not just as an isolated incident, but as a symptom of a larger, systemic issue in data security.
The breach in question involved an employee accessing over 700 patient records without a verified HIPAA purpose. It’s easy to point fingers at a single wrongdoer, but such incidents also highlight the need for more robust, automated security controls that can detect and prevent unauthorized access in real time.
At the heart of this is a two-pronged challenge: ensuring that only authorized personnel have access to sensitive patient data, and monitoring that this access is being used appropriately. However, this isn’t as simple as it may sound. Today’s healthcare environment is complex and constantly evolving, with thousands of staff needing various levels of access to patient data. Determining what constitutes “appropriate” access in such a fluid context is a nontrivial task, one that demands a solution more sophisticated than manual reviews or basic access controls.
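The automated monitoring this comment calls for, checking whether an EMR access matches a legitimate purpose, might look like the following minimal sketch. The data shapes are invented for illustration: a real system would derive the care-relationship set from scheduling, billing, and treatment records rather than a hand-built set:

```python
def flag_unverified_accesses(access_log, care_relationships):
    """Return (staff, patient) accesses with no documented relationship.

    access_log: iterable of (staff_id, patient_id) access events.
    care_relationships: set of (staff_id, patient_id) pairs with a
    documented treatment, payment, or operations justification.
    """
    # Any access not backed by a known relationship is flagged for
    # human review rather than auto-blocked, since edge cases
    # (consults, emergencies) need judgment.
    return [
        (staff, patient)
        for staff, patient in access_log
        if (staff, patient) not in care_relationships
    ]
```

A check like this would have surfaced a pattern of hundreds of unexplained record accesses long before a manual review did.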
MU Health Care’s decision to utilize workforce education to train for appropriate access to patient information is commendable, and it’s a crucial step towards cultivating a security-first mindset among staff. However, training alone may not be enough to prevent all instances of inappropriate data access, as evidenced by the recent breach.
Therefore, in tandem with training initiatives, there is a pressing need for comprehensive and automated data governance and security solutions. These technologies not only help detect inappropriate data access and use, but they also work proactively to establish an environment where such breaches are much less likely to occur.
I’m confident that MU Health Care, like many other organizations that have unfortunately found themselves in a similar situation, will not only learn from this incident but will also work towards implementing these enhanced data security measures. Data breaches can be a wake-up call, a chance to reassess and improve our data protection strategies – because at the end of the day, protecting patient data is not just about maintaining trust and compliance; it’s about safeguarding the very essence of healthcare itself.
This situation is not good at all. An insider who leaks data is in some ways worse than getting pwned by hackers. Organizations need to ensure, to the best of their ability, that insiders do not become a bigger threat than the hackers trying to break in.