With the understanding that security concerns are a fairly recent addition to the development lifecycle (at least to someone steeped in the programming paradigms of the late '70s, like Dr. Brooks), one aspect remains unequivocal: programmers are entrusted with the sort of creative work that is unrelentingly attached to the pursuit of perfect usability. This is true, for example, when designing Application Programming Interfaces (APIs): the sets of exposed, intermediary function calls and routines that provide high-level access to predefined software resources and applications, and whose latticework of dissimilar technologies can be notoriously difficult to secure.
It is also common knowledge that to achieve even a semblance of functionality, software projects must first be anchored to an adequate test environment where code can be properly isolated and application behavior safely observed. To developers, these test environments usually present a number of additional advantages, such as access to a broader collection of user data, or to specific backend system logs that would normally be under tighter scrutiny and more robust security controls.
This blog post will explore some of the cyber risks associated with insecure development environments and the challenges and trade-offs that system architects must be willing to face in safeguarding them, along with some quick tips and recommendations for the road ahead.
Risks of dev environments, and why they get hacked
There is no doubt that test environments are a necessary evil. The very tools and applications we’ve all come to know and love can attribute their primordial existence to one or more of these ecosystems as they relate to the Software Development Life Cycle (SDLC).
The staggering rate at which organizations are pushing the software development envelope, and the multitude of choices they face when it comes to hosting platforms or container alternatives, demands evermore careful planning and consideration.
Think about iterative approaches like Agile—conceived against the backdrop of the requirement for businesses to deliver results quickly and securely in consumable but manageable increments. Think also of the myriad refinements, use cases, code reviews, fuzzing techniques and, more recently, chaos engineering practices modern distributed applications must endure to be deemed production-ready. None of this would be achievable, at the proper scale, without the controlled conditions that exist in a test development environment.
That much flexibility and observability, however, comes at a price. For example, test environments are known to have less rigorous security measures and granular controls than a typical ‘live’ environment would—all under the aforementioned banner of agility.
Very frequently, providing developers with this much-desired flexibility is a balancing act: make privileges too narrow and programmers can be trapped in an endless barrage of access woes, leading to lost productivity and missed deadlines; grant too much access, and the possibility of a data breach increases dramatically. This is similar to what took place in 2018, when Shutterfly, an image sharing and printing company, warned that an employee's credentials had been leveraged by an unauthorized party to gain access to test environments storing a treasure trove of personal information.
Besides the breakdown of the principle of least privilege and other Identity and Access Management (IAM) derivatives, situations like these entail an additional set of prominent risks that include:
Poor management and limited troubleshooting capabilities of the underlying platform (e.g., virtual servers) that can lead to a sizable misalignment between test and production results.
The entire exposure (or a key subset) of the test environment to the Internet at large, significantly increasing an organization’s attack surface in the process—this can happen when network perimeter defenses are inadequate or when cloud resources are hastily and carelessly deployed.
The use of third-party libraries, frameworks, and software modules with known vulnerabilities and exposures (CVEs) that can introduce additional risk. These components generally run in full-privileged mode and are not regularly examined prior to their use.
Failing to encrypt data and other software artifacts, both in motion and at rest, to protect sensitive information, or the reuse of key management infrastructure across multiple environments—this is analogous to password sharing and it is inherently insecure.
Storing credentials and/or secrets as part of the source code. From API keys to database connection strings and passwords, hardcoding this information introduces additional risk, particularly if a publicly accessible source code management (SCM) system is involved.
Insufficient time and staff to implement rigorous logging, end-to-end monitoring and documentation services resulting in substandard incident communication and response processes.
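The hardcoded-credentials risk above is also one of the easiest to remediate. A minimal sketch in Python, assuming hypothetical variable names (DB_USER, DB_PASSWORD, DB_HOST) injected by a CI/CD pipeline or secrets manager rather than committed to source control:

```python
import os

def get_db_connection_string() -> str:
    """Build a connection string from environment variables instead of
    hardcoding credentials in source control."""
    user = os.environ["DB_USER"]          # injected at deploy time, never committed
    password = os.environ["DB_PASSWORD"]  # same: lives in a vault, not in git
    host = os.environ.get("DB_HOST", "localhost")
    return f"postgresql://{user}:{password}@{host}:5432/app"
```

Because the code never contains the secret itself, rotating a leaked password becomes a configuration change rather than a source-code change.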
Sandboxing solutions are also a popular alternative to more traditional approaches; they take advantage of technologies like virtualization to quickly deploy and test software within a narrower problem scope.
Consequently, developers can target specific, fine-grained application features without worrying too much about underlying compatibility issues. For example, sandboxes can be used to identify hard-to-find bugs, run ad hoc scenarios and demos, train personnel, and safely test emergency fixes and similar logic changes that are toilsome to conduct in production.
In addition, the very ephemeral aspect of these environments means that they can be easily removed in response to an undesired action or relocated as priorities change. Depending on the project’s size, however, some industry experts venture to suggest the need for up to five different sandbox environments. Although this level of isolation may be desired, or perhaps even encouraged, this also creates a multiplicity of endpoints that will require additional overseeing and protection.
5 tips for securing your dev environments
If you routinely engage in the practice of securing production environments, doing so for their non-production counterparts should present a comparable set of challenges. In keeping with security best practices, first and foremost, understand your organization's risk appetite and plan accordingly.
The tolerance levels associated with this activity will help you understand what is feasible and what isn’t, implement any required changes and updates, and inform your stakeholders of their roles and responsibilities for accountability purposes.
The following tips and administrative requirements are nowhere near exhaustive, but should provide a starting point to create a stable and secure substrate before considering the deployment of any test environment.
Secure your endpoints
The importance of endpoint security cannot be overstated. Fundamentally, endpoints are a doorway into your network, and with a substantial increase in the size of the mobile workforce and the use of personal devices (think BYOD) and portable storage media, proper visibility into endpoint activity is becoming increasingly challenging to maintain.
As a consequence, a growing number of enterprises are choosing to restrict the use of externally attached devices to minimize the impact of malware activity or data exfiltration. At the same time, system performance and productivity must be taken into account to allow developers to work unencumbered by lingering user experience issues. As previously explained, these challenges are not altogether different from those related to protecting the rest of your systems, with a few caveats.
If your organization is a cloud consumer, the shared responsibility model should be a good starting point to gain a distinct understanding of where your responsibilities lie when it comes to who secures what and how, as these systems will probably need to incorporate a marginally different set of requirements compared to their on-premises siblings. For example, cloud providers usually offer powerful real-time notifications as well as endpoint detection and response (EDR) software with enough features to make individual administrative tasks easier to achieve.
Additionally, cloud-based offerings are capable of providing an abundance of policy and patch management integrations, asset discovery, risk assessments, and application controls, as well as data loss prevention (DLP) and encryption mechanisms with customizable actions specific to your industry. In short, the manner in which you choose to secure your endpoints will largely dictate your level of exposure and overall operational safety.
Analyze your attack surface
In conjunction with endpoint security, protecting test environments encompasses the sum of all additional paths by which attackers can establish a strategic foothold on your network; collectively, this is often referred to as your attack surface. In general, an organization's attack surface consists of an unspecified number of systems, functions, network resources, and user interfaces that have gone unnoticed or unattended for some time.
Overlaying this information with a healthy awareness of any inactive user accounts, roles and privileges is equally important. In fact, forgotten user accounts yielding some sort of administrative capacity have been at the core of some of the most notorious data breaches in recent times.
Sizable institutions can readily attest that dealing with all this complexity and scrutinizing the risk scores attributed to each use case is no easy task. In relation to software development environments and database applications, for instance, attributes such as secret keys, intellectual property (in any form), and personally identifiable information (PII) are unquestionably sensitive in nature, and their secrecy and integrity should prevail at all costs.
Vulnerability assessments and similar attack surface management software can provide a centralized hub or modeling framework from which to identify where your prime targets are located. Lastly, if possible, take action by engaging the services of a red team to conduct multi-layered attack exercises against both external and internal-facing assets to gain further visibility into your security architecture before it is too late.
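Even before engaging such tooling, a first-pass triage can be as simple as cross-referencing a host inventory against naming conventions. A minimal sketch, assuming a hypothetical inventory format with `host`, `public`, and `ports` keys:

```python
def find_exposed_dev_hosts(inventory):
    """Flag internet-facing hosts whose names suggest a non-production role.

    `inventory` is a list of dicts with hypothetical keys:
    'host' (name), 'public' (internet-facing?), and 'ports' (open ports).
    """
    dev_markers = ("dev", "test", "staging", "qa", "sandbox")
    flagged = []
    for entry in inventory:
        name = entry["host"].lower()
        # A public host carrying a non-production marker is a prime target
        # for attackers probing the softer side of your attack surface.
        if entry["public"] and any(marker in name for marker in dev_markers):
            flagged.append((entry["host"], sorted(entry["ports"])))
    return flagged
```

Real attack surface management products apply far richer signals (certificates, banners, DNS history), but the principle of isolating non-production exposure is the same.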
Segment and isolate
It is not uncommon for production and test environments to share the same generic configuration. In fact, system designers and developers are encouraged to replicate the conditions and network settings found in production environments, preferably through the use of segmentation, to profile explicit behavior and deal with the unexpected in a controlled way.
For financial services providers, however, separation is not a choice—these organizations are required to keep environments separate as a preamble to successful auditing and other regulatory underpinnings. Additionally, this can effectively obstruct an attacker from pivoting from one environment to the next (in case of a breach) and greatly reduce the chances of unintended harm to production data from trusted parties.
Server colocation is another one of those areas where proper isolation is desired. Performance-wise, this will allow servers to scale up and down, as well as horizontally, to accommodate increasing workloads as teams go about their daily activities provisioning and de-provisioning resources in search of the right balance between scalability and integration.
Segmenting these using a combination of specific resource pools and clusters can also avoid a potential denial-of-service scenario whereby the production environment is negatively impacted by resource utilization ‘bleeds’ caused by the sharing of the same parent virtual infrastructure.
Use cloud provisioning surrogates
At the time of this writing, finding an adequate framework for segmentation doesn't have to be an uphill battle. Developers can take advantage of various feature-rich platform as a service (PaaS) offerings that include pre-configured environments and DevOps integration for superior workflows. Moreover, products such as HashiCorp's Terraform (with providers for Microsoft Azure and other clouds) can create environments tailored by size, instance count, and other properties in a declarative format that is easy to consume and maintain.
This departure from more traditional deployment methods is known as infrastructure as code (IaC), and it adopts idempotent principles to guarantee consistency and repeatability across your deployments. In short, take advantage of these cloud services when possible and cut down on coding time by transferring many (if not all) of the risks associated with supporting the middleware.
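The idempotency that IaC tools rely on can be illustrated with a toy reconcile loop. This is a conceptual sketch, not any particular tool's implementation: `current` and `desired` are hypothetical dictionaries mapping resource names to their specifications.

```python
def reconcile(current: dict, desired: dict) -> dict:
    """Converge an environment's state toward a desired declaration.

    Idempotent: applying the same declaration twice yields the same
    result as applying it once, which is why IaC runs are safely repeatable.
    """
    state = dict(current)
    for resource, spec in desired.items():
        if state.get(resource) != spec:
            state[resource] = spec      # create or update drifting resources
    for resource in list(state):
        if resource not in desired:
            del state[resource]         # remove anything no longer declared
    return state
```

Because the outcome depends only on the declaration and not on the starting state, a drifted test environment can always be brought back in line by simply re-applying the same code.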
As previously mentioned, however, cloud security is a shared responsibility and PaaS solutions are not exempt from this reality. It is still necessary to understand the implications surrounding inactive user accounts and misconfigured role-based access controls. If container-based application development is involved, failing to understand distributed concepts such as namespaces and orchestration will certainly cause some headaches and introduce additional security concerns—regardless of choice, always follow the principle of least privilege and audit your traffic regularly to ensure all authentication and authorization mechanisms are up to par.
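A least-privilege audit can start with something as simple as flagging wildcard grants. A minimal sketch over AWS IAM-style policy documents (the `Statement`/`Effect`/`Action`/`Resource` shape is real; the policy contents here are illustrative):

```python
def find_overbroad_statements(policy: dict) -> list:
    """Return Allow statements in an IAM-style policy document that grant
    wildcard actions or resources, a common least-privilege violation."""
    findings = []
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        resources = stmt.get("Resource", [])
        # IAM allows either a single string or a list; normalize to lists.
        if isinstance(actions, str):
            actions = [actions]
        if isinstance(resources, str):
            resources = [resources]
        if "*" in actions or "*" in resources:
            findings.append(stmt)
    return findings
```

Running a check like this over the policies attached to test-environment roles is a cheap way to catch the "just give it admin so it works" shortcuts that tend to accumulate there.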
Monitor your environments
When problems occur in test environments (and trust us, they do), you may be forced to check multiple sources—that is, any given combination of system and application logs, database records, API caches, and just about any other piece of technology involved. To reference a companion article that explores a handful of Amazon Web Services (AWS) attack surfaces, always alert on platform-wide actions using products such as AWS CloudTrail.
CloudTrail delivers an important subset of governance and auditing capabilities related to AWS accounts. In essence, these web-based tools center on round-the-clock visibility and logging of any user or service activity related to API usage, including actions taken via the management console or the command line.
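Turning those logs into alerts can be sketched in a few lines. The `Records`, `eventName`, and `userIdentity` fields below follow the CloudTrail log format; the watchlist itself is an illustrative assumption, not an official list:

```python
import json

# Actions worth alerting on in a test account. An illustrative,
# deliberately short watchlist, not an exhaustive one.
SENSITIVE_ACTIONS = {"ConsoleLogin", "CreateAccessKey", "PutBucketPolicy"}

def triage_events(raw_log: str) -> list:
    """Parse a CloudTrail-style JSON log and return (user, action) pairs
    for events matching the watchlist."""
    records = json.loads(raw_log).get("Records", [])
    alerts = []
    for event in records:
        if event.get("eventName") in SENSITIVE_ACTIONS:
            user = event.get("userIdentity", {}).get("userName", "unknown")
            alerts.append((user, event["eventName"]))
    return alerts
```

In practice you would wire this to an alerting pipeline (CloudWatch, a SIEM, or similar) rather than polling raw files, but the triage logic is the same.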
Finally, trust your developers and quality assurance (QA) teams, but verify their interactions with the environment. Where automated processes and systems have been established, assume there is no room for manual user activity.
Facilitate observability but alert judiciously—security teams are overburdened with business-critical infrastructure and alerts from test environments may easily fly under the radar (or be utterly ignored), even if these constitute early warning signs of more serious events to come.
How to find dev environments within your online infrastructure
Finding dev environments in your organization, when you have several projects and domains running at the same time, can be a tricky challenge.
Luckily, here at SecurityTrails we have developed an Attack Surface Reduction tool that can help you find any dev and testing subdomains in just a few seconds, giving you the full picture you need to detect and take proper action over these digital assets.
Test environments demand more than the balcony view approach to the challenge of preserving the confidentiality and integrity of the highly sensitive information they contain. While many organizations invest heavily in deploying cutting-edge security around their production assets, non-production environments represent a viable (if not more effective) alternative for attackers to leverage when they are left unprotected.
As an added measure, look for any credentials your developers may have inadvertently exposed when using public repositories and similar versioning control sites, as there are some important steps you need to follow before a potential leak manifests into a full-blown data breach. The next time your organization brings to the table its newest risk management strategy, don’t forget to mention these topics if you have a say in the matter. You’ll be glad you did.
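That search for exposed credentials can be partly automated with pattern matching. A minimal sketch with two illustrative rules (real scanners such as trufflehog or gitleaks ship far more comprehensive rule sets; the AWS access key prefix `AKIA` is a documented convention):

```python
import re

# Illustrative patterns only: an AWS access key ID and a quoted
# password assignment. Production scanners use hundreds of rules.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "password_assignment": re.compile(r"password\s*=\s*['\"][^'\"]+['\"]", re.IGNORECASE),
}

def scan_for_secrets(source: str) -> list:
    """Return (line_number, rule_name) pairs for lines that look like
    hardcoded credentials."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                hits.append((lineno, name))
    return hits
```

Running a scan like this as a pre-commit hook or CI step catches leaks before they ever reach a public repository, which is far cheaper than rotating credentials after the fact.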