Archive for May 19, 2023

MU Healthcare Suffers A Data Breach Via An Insider

Posted in Commentary on May 19, 2023 by itnerd

MU Healthcare has posted a data breach notification that got my attention today:

Upon learning on March 20, 2023, that a workforce member may have been accessing health information in the electronic medical record (EMR) inappropriately, we immediately began an investigation and suspended the workforce member’s access to the EMR.

The subsequent investigation revealed the workforce member used the electronic medical record (EMR) to access 736 medical records between July 2021 and March 2023 potentially without a verified Health Insurance Portability and Accountability Act (HIPAA) purpose.

The accesses may have contained patient information including name, date of birth, medical record number, and limited treatment and/or clinical information, such as diagnostic and/or procedure information.

To date, there is no indication that the information was misused or re-disclosed. However, MU Health Care began mailing notification letters to patients whose information may have been inappropriately accessed, alerting them to the incident and advising them to be vigilant in the event of any suspicious activity involving their accounts.

Ani Chaudhuri, CEO of Dasera, had this comment:

The news about the data breach at MU Health Care underscores a widespread challenge within the industry: keeping sensitive patient data secure. While it’s distressing to see another breach, especially one involving an insider threat, it’s important to view this situation not just as an isolated incident, but as a symptom of a larger, systemic issue in data security.

The breach in question involved an employee accessing over 700 patient records without verified HIPAA purpose. It’s easy to point fingers at a single wrongdoer, but such incidents also highlight the need for more robust, automated security controls that can detect and prevent unauthorized access in real time.

At the heart of this is a two-pronged challenge: ensuring that only authorized personnel have access to sensitive patient data, and monitoring that this access is being used appropriately. However, this isn’t as simple as it may sound. Today’s healthcare environment is complex and constantly evolving, with thousands of staff needing various levels of access to patient data. Determining what constitutes “appropriate” access in such a fluid context is a nontrivial task, one that demands a solution more sophisticated than manual reviews or basic access controls.

MU Health Care’s decision to utilize workforce education to train for appropriate access to patient information is commendable, and it’s a crucial step towards cultivating a security-first mindset among staff. However, training alone may not be enough to prevent all instances of inappropriate data access, as evidenced by the recent breach.

Therefore, in tandem with training initiatives, there is a pressing need for comprehensive and automated data governance and security solutions. These technologies not only help detect inappropriate data access and use, but they also work proactively to establish an environment where such breaches are much less likely to occur.

I’m confident that MU Health Care, like many other organizations that have unfortunately found themselves in a similar situation, will not only learn from this incident but will also work towards implementing these enhanced data security measures. Data breaches can be a wake-up call, a chance to reassess and improve our data protection strategies – because at the end of the day, protecting patient data is not just about maintaining trust and compliance; it’s about safeguarding the very essence of healthcare itself.

This situation is not good at all. An insider who leaks data is in some ways worse than getting pwned by hackers. Organizations need to do everything in their power to ensure that insiders don’t become a bigger threat than the hackers trying to break in.

When We Think Of The Risks Of AI, Are We Thinking About The Right Risks?

Posted in Commentary on May 19, 2023 by itnerd

AI is popping up everywhere. And people have to adjust not only to the work-related and cultural implications of AI, but to the security implications as well. Take the impact on financial advisors. There’s a suggestion that AI can really help usher in a revolution in this industry:

According to a recent poll by Morgan Stanley Wealth Management, 72% of investors think artificial intelligence will change the way they and traders operate, and 74% of respondents think it would improve the quality of client service provided by financial advisers. 82% of respondents stated AI would never replace human counsel, and 88% agreed that the human-to-human contact with a financial adviser is crucial. However, 63% of respondents would be interested in working with an advising business that uses AI.

The poll discovered that younger investors are more excited about AI’s possibilities. Eighty-seven percent of respondents in the 35 to 44 age range said AI was a game-changer, 89% believed it will help advisors provide better client service, and 85% indicated they would be interested in working with an advisor who uses it.

Younger investors, however, share the general sample’s conviction that the advisor-client relationship will not be replaced by AI. A sample of 924 American investors participated in the online poll, which Dynata solicited and ran in April.

But others urge caution:

“While AI is clearly groundbreaking, and we are just scratching the surface of its potential impact within financial services, this data aligns with an insight we’ve known for some time: The clients who are most engaged with their financial advisors are also the most satisfied,” Jeff McMillan, MSWM’s head of analytics, data and innovation, said in a statement.

“Within this context, AI should be viewed not as a replacement of human guidance, but as a powerful tool to help turbocharge a financial advisor’s practice management and client interaction capabilities.”

There’s another thing to consider: what risks come with actively using AI in this industry? Ani Chaudhuri, CEO of Dasera, speaks to two big risks, privacy and data security:

As we delve into the age of AI-driven financial advising with tools like Bard, it’s crucial to understand its immense potential and inherent risks. The idea of AI-assisted financial advice is groundbreaking, and it could significantly streamline processes, democratize financial planning, and deliver personalized strategies at scale. However, its implementation is not without its challenges.

From a high-level perspective, AI tools like Bard leverage sophisticated algorithms and machine learning techniques to analyze vast amounts of data. They generate insights, make predictions, and offer advice based on the patterns and trends they discern.

However, such powerful tools are not without their shortcomings. One of the most pressing concerns lies in data security and privacy. Given the sensitive nature of financial data, any compromise could lead to severe consequences. AI systems are as secure as the measures put in place to protect them, and they are not immune to breaches or misuse.

The use of AI in financial advising also raises privacy concerns. As these AI tools process vast amounts of personal data to provide personalized advice, robust measures must be in place to ensure the privacy of this data. Transparency about how the data is used, stored, and protected should be a priority.

Moreover, as AI becomes more integrated with financial advising, firms must ensure robust data governance measures are in place. This includes maintaining detailed logs of AI actions and decisions for auditing purposes, having clear visibility over who has access to the AI and the data it processes, and having measures to swiftly detect and respond to any anomalies or potential security incidents.

The rise of AI in financial advising underscores the increasing importance of cybersecurity in the financial sector. While AI can revolutionize financial advising, firms must navigate this path carefully, ensuring they balance innovation and security.

Privacy risks related to AI appear in another place: Hollywood. At the moment there is a writers’ strike, and one of the issues on the table is AI:

The Writers Guild of America (WGA), a labour union representing writers who primarily work in film and television, began the work strike this month after reaching an impasse in negotiations with the Alliance of Motion Picture and Television Producers that represents the US entertainment industry. Part of the disagreement revolves around a WGA proposal to ban the industry from using AIs such as ChatGPT to generate story ideas or scripts for films and shows – the union wants to ensure that such technologies do not undermine writers’ compensation and writing credits.

“The fear is that AI could be used to produce first drafts of shows, and then a small number of writers would work off of those scripts,” says Virginia Doellgast at Cornell University in New York.

Now that sounds like something out of a Hollywood script. But Ani Chaudhuri, CEO of Dasera, doesn’t think so. Instead, he has other concerns:

The emergence of AI in Hollywood signals a paradigm shift in content creation. This technology holds the potential to unlock new creative avenues, but we should recognize the distinct challenges it introduces.

AI may streamline the production process but cannot replace the human touch in storytelling. Content ‘perfection’ cannot be defined by algorithms alone. Artistry, after all, thrives on spontaneity, innovation, and human emotion – elements AI cannot replicate in its entirety. While AI can augment the creative process, the fear of artists being entirely replaced is unwarranted. The challenge lies in striking the right balance where AI complements human creativity rather than supplants it. This is how our marketing team is working with various AI tools today.

The incorporation of AI also introduces fresh data security and privacy risks. As AI models consume vast amounts of data for training and development, this data could be misused or mishandled, potentially leading to breaches. There’s also the risk of ‘deepfakes,’ manipulated videos created using AI, which could tarnish reputations or spread disinformation.

Studios and streaming platforms must take these risks seriously. This necessitates robust data security and governance frameworks to protect sensitive information and uphold the privacy of creators and audiences. Data access should be strictly regulated on a need-to-know basis, with clear visibility over who is accessing what data and why. Regular audits should be conducted to detect any anomalies or potential data misuse.

Moreover, cybersecurity measures must extend to AI tools to ensure they’re not manipulated for malicious intent. Clear guidelines should be established for the ethical use of AI, and these should be transparent to all stakeholders involved, including creators and audiences.

The marriage of Hollywood and AI is exciting, but it must be navigated thoughtfully to protect the creative process, uphold security, and maintain trust.

The concerns Mr. Chaudhuri raises here echo the ones he raised about the financial industry. That suggests to me that maybe people aren’t focused enough on privacy and security when it comes to AI. So instead of thinking about jobs being lost, or Skynet from the Terminator movies destroying all humanity, maybe the conversation needs to shift to more practical matters, seeing as privacy and security are today’s problems.

Guest Post: Out of Sight, Out of Mind? The Dangers of Forgetting About WORM Data Retention Periods

Posted in Commentary on May 19, 2023 by itnerd

By Michael Jack, CRO and Co-Founder, Datadobi

The Importance of Storing Certain Data Under WORM Control

WORM stands for Write Once, Read Many: data storage that allows data to be written only once and then read many times, but never modified. WORM storage is required for a variety of reasons, particularly in industries where data integrity, authenticity, and long-term preservation are critical. Examples include the financial services, healthcare, legal, telecommunications, and government sectors. In these industries, WORM storage is used to hold important records, such as financial transactions, medical records, legal contracts, and government archives. The data is kept in a format that cannot be altered or deleted, ensuring that the information is preserved in its original state and is available for audit or legal purposes if necessary. WORM storage also protects against data tampering and helps ensure compliance with various industry regulations, such as SEC Rule 17a-4(f), HIPAA, and GDPR.

In response to these requirements, storage vendors have incorporated WORM capabilities into their systems. These capabilities allow for the creation of special storage areas where data can be stored for a specified period of time without being deleted or modified. The retention date of each file is stored individually, as retention requirements may vary. For instance, some files may need to be retained for seven years while others may need to be kept for 10 years, and so on.
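To make the mechanics concrete, here is a minimal Python sketch of the write-once, per-file-retention behavior described above. It is a toy illustration under assumed semantics, not any vendor’s actual API: each file is written exactly once with its own retention expiry, reads are always allowed, and deletion is refused until the retention date passes.

```python
from datetime import date, timedelta

class WormStore:
    """Toy WORM store: each file is written once with its own retention date."""

    def __init__(self):
        self._files = {}  # path -> (data, retention_expiry)

    def write(self, path, data, retention_years):
        # Write-once: a second write to the same path is rejected.
        if path in self._files:
            raise PermissionError(f"{path} is WORM-protected: write-once only")
        expiry = date.today() + timedelta(days=365 * retention_years)
        self._files[path] = (data, expiry)

    def read(self, path):
        # Read-many: reads are always permitted.
        return self._files[path][0]

    def delete(self, path):
        # Deletion is blocked until the per-file retention date has passed.
        _, expiry = self._files[path]
        if date.today() < expiry:
            raise PermissionError(f"{path} under retention until {expiry}")
        del self._files[path]

# Retention periods vary per file, as described above (hypothetical paths).
store = WormStore()
store.write("/records/txn-001.pdf", b"...", retention_years=7)
store.write("/records/chart-002.pdf", b"...", retention_years=10)
```

In a real system the retention date would be enforced by the storage layer itself, not by application code, but the contract is the same: write once, read many, release only after expiry.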

The Risks Associated with Storing WORM Data Beyond Required Retention Periods

While it is critical to ensure that WORM data is stored for the required retention period, it is equally important to be able to identify when the retention period has expired and to take the necessary next steps, for several reasons. First, retaining data beyond the required retention period can result in unnecessary storage costs and consume valuable resources. Second, it can increase the risk of data breaches: the longer data is stored, the more opportunities there are for unauthorized access or theft. Third, it can create legal and regulatory risks, as organizations can be held liable for retaining data that they are not authorized to keep. And last but not least (many would argue most importantly), any legal e-discovery activity resulting from litigation would capture all of the organization’s data: not just the data that is still required to be retained, but also all the data that could have been deleted.

It’s Time for a WORM Data Governance Plan

The first step in overcoming these challenges is the development of a WORM Data Governance Plan. Ideally, it should involve various groups within the organization and provide clear policies and guidelines for managing and protecting data. This includes specifying retention periods, data classification and labeling, and implementation of data access controls, encryption, and monitoring tools. Regular audits and assessments should be conducted to identify vulnerabilities and ensure compliance with relevant regulations.

Groups involved in the development and implementation of the data governance plan include IT professionals, legal and compliance teams, business units, and risk management and audit teams. A collaborative approach involving all relevant groups can ensure the plan is comprehensive, effective, and tailored to your organization’s needs and goals.

Simpler In Theory Than In Practice

Unfortunately, it can be difficult to keep track of WORM data retention periods, especially when dealing with a large amount of data stored in this format. While some storage systems that house WORM data may have functionality that allows for “retention release,” not all applications that write data can track retention periods and prune datasets as needed. Even in cases where an application has this functionality, it can be brittle due to the loose coupling between the data being stored, the data storage system(s), and the application’s visibility into that data.
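The “retention release” problem above boils down to one question: which files are past their retention date right now? The following Python sketch shows the basic partitioning step; the `retention_index` dict and file paths are hypothetical stand-ins, since in a real deployment this per-file metadata lives in the storage system rather than in application memory.

```python
from datetime import date

# Hypothetical per-file retention index: path -> retention expiry date.
retention_index = {
    "/worm/fin/txn-2015-001.pdf": date(2022, 12, 31),  # 7-year hold, expired
    "/worm/med/chart-88127.pdf":  date(2031, 6, 30),   # 10-year hold, active
}

def partition_by_retention(index, today=None):
    """Split files into those past their retention date (eligible for
    release) and those still under WORM hold."""
    today = today or date.today()
    expired  = [p for p, expiry in index.items() if expiry < today]
    retained = [p for p, expiry in index.items() if expiry >= today]
    return expired, retained

expired, retained = partition_by_retention(
    retention_index, today=date(2023, 5, 19)
)
print(expired)  # files eligible for migration or deletion
```

The hard part in practice is not this comparison but building and trusting the index itself, which is exactly the visibility gap the paragraph above describes.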

Improve WORM Data Governance and Reduce Risks with StorageMAP

But the good news is that there is a solution that can help you mitigate these risks and improve your data governance practices: StorageMAP. StorageMAP can identify files that have exceeded their retention period and then produce lists of those files so they can be acted upon, either by moving them to a different storage system or by deleting them.

In conclusion, a well-managed WORM data environment can offer tremendous benefits for organizations. By effectively managing WORM data throughout its lifecycle, organizations can enhance their compliance posture, improve operational efficiency, and build trust with customers and stakeholders. Investing in a comprehensive data management solution, such as StorageMAP, can help organizations to achieve these benefits and set themselves apart from their competitors. So don’t wait until it’s too late. Act now to implement effective WORM data management and gain a strategic edge in today’s data-driven landscape.