Read this CyberTech Top Voice article to learn how to prevent the three most painful mistakes affecting database and cloud management resources in organizations today.
Given the notoriety of cybercriminals today, business leadership often devotes security efforts to thwarting malicious actors. While essential, this focus can lead to overlooking the damage inadvertently caused by insiders. Whether labeled a mistake or human error, it is inevitable that internal actors will unintentionally take actions that harm the organization over time.
The Annual Outage Analysis 2024 from Uptime Institute, based on 25 years of data, shows that human error contributes to between 66% and 80% of all downtime incidents. The same survey reveals that a majority of respondents incurred costs exceeding $100,000 for their most recent outage, with 16% reporting losses of over $1 million. In short, these mistakes are expensive to resolve. Unfortunately, as resources become ever easier to access and interact with, the opportunities for human error keep growing.
To help organizations minimize the cost of outages, it’s crucial to understand the common pitfalls that can affect databases and cloud resources.
3 Common Mistakes and Challenges Impacting Databases and Cloud Resources
- Mistaking Production for Staging: Security teams universally recognize the risks and financial consequences when problems impact production, and they take proactive steps to avoid them. That said, standing access often makes it easy for developers and other personnel to reach production environments despite the security team’s best efforts. A common scenario involves individuals believing they are in a staging environment when, in fact, a production tab remains open from a previous session. This combination of easy access and human error can lead developers to inadvertently make system-wide changes without realizing the impact (see the first sketch after this list).
- Querying a Database Too Extensively: Databases are meant to be queried, but users still need to treat them with care. Best practice is to query a replica rather than the live database backing the application, but mistaking one for the other is bound to happen. A simple mistake here, such as an unbounded scan of a large table, can bring the whole database down. Even users with read-only privileges can run the wrong command and put too much stress on the database (see the second sketch after this list).
- Unintentionally Dropping Tables: A single mistaken command is enough to remove a critical table, or even a critical row, from a database, leading to data loss and structural damage. Dropping a table can have knock-on effects, impacting and possibly invalidating dependent data elsewhere. Recreating the table takes a fair amount of work, including re-granting object privileges, recreating indexes, and other tedious tasks that eat time better spent elsewhere. Undoing the damage is not always possible, making this a particularly painful mistake.
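To make the first mistake concrete, here is a minimal sketch of an environment guard: a connection helper that refuses to open a production session unless the caller explicitly asks for one. It assumes PostgreSQL accessed via psycopg2; the DB_ENV variable, DSN strings, and connect helper are hypothetical illustrations, not any specific product’s API.

```python
import os
import psycopg2

# Hypothetical connection strings per environment.
DSNS = {
    "staging": "host=staging-db dbname=app user=app_rw",
    "production": "host=prod-db dbname=app user=app_rw",
}

def connect(env=None, confirm_production=False):
    """Open a connection, failing loudly unless production is explicitly intended."""
    env = env or os.environ.get("DB_ENV", "staging")
    if env == "production" and not confirm_production:
        raise RuntimeError(
            "Refusing to open a production connection without confirm_production=True"
        )
    return psycopg2.connect(DSNS[env])

# A developer who believes they are in staging gets an error instead of
# silently making system-wide changes in production:
conn = connect()                        # staging by default
# connect("production")                 # raises RuntimeError
# connect("production", confirm_production=True)  # deliberate, auditable path
```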
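For the second mistake, a complementary guardrail is to cap how long any single query may run, so an accidental full scan cannot monopolize the database. This sketch uses PostgreSQL’s statement_timeout setting via psycopg2; the replica host, table name, and five-second budget are assumptions for illustration.

```python
import psycopg2

# Point analytical work at the replica, not the primary behind the application.
conn = psycopg2.connect("host=replica-db dbname=app user=analyst")
with conn.cursor() as cur:
    cur.execute("SET statement_timeout = '5s'")   # server cancels anything slower
    try:
        cur.execute("SELECT * FROM orders")        # accidental unbounded scan
        rows = cur.fetchmany(100)
    except psycopg2.errors.QueryCanceled:
        conn.rollback()
        print("Query exceeded the 5s budget and was cancelled by the server")
```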
Implementing Least Privilege to Reduce Harm
People are bound to make mistakes, and given enough time, the mistakes above will impact any organization. To mitigate the harm to critical resources, organizations can take proactive steps. First and foremost, implementing the principle of Least Privilege across the organization is a key way to add measured friction to the process of accessing, and acting on, critical resources.
The principle of Least Privilege calls for granting access only to the resources a person needs to do their job – no more, no less. While it is often applied to limit the harm a malicious actor can do during a breach, it also safeguards against accidental actions by well-intentioned insiders, making it just a little harder for them to stumble into such a mishap.
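As a sketch of what Least Privilege looks like in practice, the following grants an application role and an analyst role exactly the table privileges they need, so a mistyped DROP TABLE from either role simply fails with a permission error. It assumes PostgreSQL administered through psycopg2; the role, schema, and password values are hypothetical.

```python
import psycopg2

conn = psycopg2.connect("host=prod-db dbname=app user=dba")
with conn.cursor() as cur:
    # The service account gets read/write on application tables -- no DDL,
    # so DROP TABLE or ALTER TABLE from this role is refused by the server.
    cur.execute("CREATE ROLE app_service LOGIN PASSWORD 'example-only'")
    cur.execute("GRANT USAGE ON SCHEMA app TO app_service")
    cur.execute("GRANT SELECT, INSERT, UPDATE ON ALL TABLES IN SCHEMA app TO app_service")
    # Analysts get read-only access, ideally against the replica, nothing more.
    cur.execute("CREATE ROLE analyst LOGIN PASSWORD 'example-only'")
    cur.execute("GRANT USAGE ON SCHEMA app TO analyst")
    cur.execute("GRANT SELECT ON ALL TABLES IN SCHEMA app TO analyst")
conn.commit()
```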
Taking this further, the most significant inadvertent harm occurs within the production environment, which should be the most guarded part of an organization. As such, there is no good reason for anyone, even a small team of skilled DevOps professionals, to have standing access to production.
While leadership may understand the value of this change, we can also expect concerns about developers having to go through an approval process for access they legitimately need, costing them valuable time. The trick to making this work is automating the access policies.
Ultimately, organizations need to prioritize access management and implement automation that reacts to dynamic context, granting instant access and reserving manual approval for only the most important resources (see the sketch below). With privileges limited, an employee who accidentally connects to the wrong database cannot make changes there, shrinking the blast radius of any mistake. Organizations also need to approach this strategy from a productivity standpoint to keep pace with the demands of modern business.
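Below is a minimal sketch of what such automation could look like: routine requests are approved instantly with a time-boxed grant that expires on its own, while only the most sensitive resources fall back to a human approver. The roles, resource names, and policy table are all hypothetical; a real deployment would plug into an identity provider and an audit log.

```python
from datetime import datetime, timedelta, timezone

SENSITIVE = {"production-db"}        # resources that always need a human approver
ROLE_RESOURCES = {                   # what each role may request automatically
    "developer": {"staging-db", "ci-logs"},
    "dba": {"staging-db", "production-db"},
}

def evaluate(role, resource, ttl_minutes=60):
    """Return an auto-approved, time-boxed grant, or defer to manual review."""
    allowed = ROLE_RESOURCES.get(role, set())
    if resource in allowed and resource not in SENSITIVE:
        expires = datetime.now(timezone.utc) + timedelta(minutes=ttl_minutes)
        return {"decision": "auto-approve", "expires_at": expires.isoformat()}
    if resource in allowed:          # allowed but sensitive: route to a human
        return {"decision": "manual-review"}
    return {"decision": "deny"}

print(evaluate("developer", "staging-db"))     # instant, expiring access
print(evaluate("developer", "production-db"))  # deny: outside the role
print(evaluate("dba", "production-db"))        # manual review for the crown jewels
```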
Business productivity should not come at the cost of security, nor security at the cost of productivity. By following the steps outlined here, companies will be on the right track to staying secure and compliant while maintaining productivity and agility.