Infosec Island Latest Articles

When Data Is Currency, Who’s Responsible for Its Security? Tue, 11 Feb 2020 13:13:38 -0600 In a year that was all about data and privacy, it seems only fitting that we closed out 2019 in the shadow of a jumbo data leak where more than a billion records were found exposed on a single server.

Despite this being one of the largest data exposures from a single source in history, it didn’t cause nearly the public uproar that one might expect from a leak involving personal information such as names, email addresses, phone numbers, LinkedIn and Facebook profiles. Instead, this quickly became yet another case of consumer information being mishandled, impacting many of the same consumers that have been burned several times already by companies they trusted.

What’s different about this leak – and what should have given consumers and businesses alike pause – is the way in which this case highlights a more complex problem with data that exists today.

There’s no question that data is a very valuable asset. Organizations have done a great job figuring out how to capture consumer data over the last decade and are now beginning to use and monetize it. The problem is, that data can also be used in many different ways to inflict serious pain on victims in their personal and business lives. So, when that data goes through someone’s hands (business or individual), how much responsibility do they – and those up the lifecycle chain – have for where it ends up?

Beginning at the consumer level, users can opt out of sharing data and should do so at any chance they get if they are concerned about having their information exposed. The good news is that new regulations like the GDPR and CCPA are making this easier to do retroactively than ever before. The challenge is that the system isn’t perfect. Aliases and other databases can still be difficult to opt out of because although they may have information captured, errors like misspellings can prevent consumers from getting to their own data.

With this particular incident, we also caught a glimpse of the role that data enrichment firms, aggregators and brokers play in security. Although it didn’t come directly from their own servers, the exposed data was likely tied to enrichment firms People Data Labs (PDL) and OxyData. While several data brokers today are taking more responsibility and offering security and privacy education to their customers, it was alarming to see that neither data broker in this case could rule out the possibility that their data was mishandled by a customer. In fact, rather than pushing for a solution, OxyData seemed to shirk responsibility entirely when speaking with WIRED.

Data brokers need to own up to this challenge and look at better screening of their customers to ensure their use of data has valid purposes. A case study by James Pavur, DPhil student at Oxford University, underscored these failings in the system when he used GDPR Subject Access Requests to obtain his data from about 20 companies, many of which didn't ask for sufficient ID before sharing the information. He went on to try to get as much data as possible about his fiancée, finding he could access a range of sensitive data, including everything from addresses and credit card numbers to travel itineraries. None of this should be possible with proper screening in place.

Ultimately, whoever owns the server where the leak originated is the one that will be held legally and fiscally responsible. But should data brokers be emulating the shared responsibility model in use by cloud services like AWS? Either way, by understanding the lifecycle of data and taking additional responsibility upstream, we can begin to cut down on the negative impact when exposures like this inevitably occur.

About the author: Jason Bevis is the vice president of Awake Security Labs at Awake Security. He has extensive experience in professional services, cybersecurity MDR solutions, incident response, risk management and automation products.

Copyright 2010 Respective Author at Infosec Island
SEC Shares Cybersecurity and Resiliency Observations Thu, 30 Jan 2020 14:09:56 -0600 The U.S. Securities and Exchange Commission (SEC) this week published a report detailing cybersecurity and operational resiliency practices that market participants have adopted. 

The 10-page document (PDF) contains observations from the SEC's Office of Compliance Inspections and Examinations (OCIE) that are designed to help other organizations improve their cybersecurity stance.

OCIE examines SEC-registered organizations such as investment advisers, investment companies, broker-dealers, self-regulatory organizations, clearing agencies, transfer agents, and others.

Through its reviews, OCIE has observed approaches that some organizations have taken in areas such as governance and risk management, access rights and controls, data loss prevention, mobile security, incident response and resiliency, vendor management, and training and awareness. 

Observed risk management and governance measures include senior level engagement, risk assessment, testing and monitoring, continuous evaluation and adapting to changes, and communication. Practices observed in the area of vendor management include establishing a program, understanding vendor relationships, and monitoring and testing. 

Strategies related to access rights and controls that were observed include access management and access monitoring. Utilized data loss prevention measures include vulnerability scanning, perimeter security, patch management, encryption and network segmentation, and insider threat monitoring, among others. 

In terms of mobile security, organizations adopted mobile device management (MDM) applications or similar technology, implemented security measures, and trained employees. Strategies for incident response include inventorying core business operations and systems, and assessing risk and prioritizing business operation. 

By sharing these observations, the SEC hopes to encourage organizations to review their practices, policies and procedures and assess their level of preparedness. 

The presented measures should help any organization become more secure, OCIE says, admitting that “there is no such thing as a ‘one-size fits all’ approach.” In fact, it also points out that not all of these practices may be appropriate for all organizations. 

“Through risk-targeted examinations in all five examination program areas, OCIE has observed a number of practices used to manage and combat cyber risk and to build operational resiliency. We felt it was critical to share these observations in order to allow organizations the opportunity to reflect on their own cybersecurity practices,” Peter Driscoll, Director of OCIE, said. 

Related: Cyber Best Practices Requires a Security-First Approach

Related: Best Practices for Evaluating and Vetting Third Parties

Related: Perception vs. Reality in Federal Government Security Practices

What Does Being Data-Centric Actually Look Like? Fri, 17 Jan 2020 09:46:22 -0600 “Data-centric” can sometimes feel like a meaningless buzzword. While many companies are vocal about the benefits of this approach, in reality the term is not widely understood.

One source of confusion is that many companies have implemented an older approach – that of being “data-driven” – and just called this something else. Being data-centric is not the same as being data-driven. And, being data-centric brings new security challenges that must be taken into consideration. 

A good way of defining the difference is to talk about culture. In Creating a Data-Driven Organization, Carl Anderson starts off by saying, “Data-drivenness is about building tools, abilities, and, most crucially, a culture that acts on data.” In short, being data-driven is about acquiring and analyzing data to make better decisions.

Data-centric approaches build on this but change the managerial hierarchy that informs it. Instead of data teams collecting data, management teams making reports about it, and then CMOs taking decisions, data centrism aims to give everyone (or almost everyone) direct access to the data that drives your business. In short, creating a data-driven culture is no longer enough: instead, you should aim to make data the core of your business by ensuring that everyone is working with it directly.

This is a fairly high-level definition of the term, but it has practical implications. Implementing a data-centric approach includes the following processes.

1. Re-Think Your Organizational Structure

Perhaps the most fundamental aspect of data-centric approaches is that they rely on innovative (and sometimes radical) management structures. As Adam Chicktong put it a few years ago, these structures are built around an inversion of traditional hierarchies: instead of decisions flowing from executives through middle management to data staff, in data-centric approaches everyone’s “job is to empower their team [to] do their job and better their career”.

This has many advantages. In a recent CMO article, Maile Carnegie talked about the ‘frozen middle’, where middle management is inherently structured to resist change. By looking closely at your hierarchy and identifying departments and positions likely to resist change, you’ll be able to streamline the structure to allow transformation to filter through the business more easily. As she puts it, “Increasingly, most businesses are getting to a point where there are people in their organization who are no longer experts in a craft, and who have graduated from doing to managing and basically bossing other people around and shuffling PowerPoints.”

2. Empowering the Right People

Once these novel managerial structures are in place, the focus must necessarily shift toward empowering, rather than managing, staff. Effectively employing a data-centric approach means giving the right people access to the data that underpins your business, but also allowing them to affect the types of data you are collecting. 

Let’s take access first. At the moment, many businesses (and even many of those that claim to be data-driven) employ extremely long communicative chains to work with the data they collect. IT staff report their findings, ultimately, to the executive level, who then disseminate this to marketing, PR, risk and HR departments. One of the major advantages of new data infrastructures, and indeed one of the major advantages of cloud storage, is that you can grant these groups direct access to your cloud storage solution. 

Not only does this cut down the time it takes for data to flow to the "correct" teams, making your business more efficient; if implemented skillfully, it can also be a powerful way of eliciting input from them on what kinds of data you should be collecting. Most businesses would agree, I think, that executives don't always have a granular appreciation for the kind of data that their teams need. Empowering these teams to drive novel forms of data collection short-circuits these problems by encouraging direct input into data structures.

3. Process Not Event

Third, transitioning to a data-centric approach entails not just a change in managerial structure, responsibility, and security. At the broadest level, this approach requires a change in the way that businesses think about development.

Nowadays, running an online business is not as simple as identifying a target audience, creating a website, and waiting to see if it is effective. Instead, the dissolution of the previously rigid divide between the executive, marketing, and data teams means that every business decision should be seen as a process, not an event.

4. Security and Responsibility

Ultimately, it should also be noted that changing your managerial structure in this way, and empowering teams to take control of your data collection processes, also raises significant problems when it comes to security.

At a basic level, it’s clear that dramatically increasing the number of people with access to data systems simultaneously makes these systems less secure. For that reason, implementing a data-centric approach must also include the implementation of extra security measures and tools. 

These include managerial systems to ensure responsible data retention, but also training for staff who have not worked with data before, and who may not know how to take basic security steps like using secure browsers and connecting to the company network through a VPN when using public WiFi. On the other hand, data centrism can bring huge benefits to the overall security of organizations. 

Alongside the approach’s contribution to marketing and operational processes, data-centric security is also now a field of active research. In addition, the capability to share emerging threats with almost everyone in your organization greatly increases the efficacy of your cybersecurity team.

Data-centric approaches are a powerful way of increasing the adaptability and profitability of your business, but you should also note that becoming truly data-centric involves quite radical changes in the way that your business is organized. Done correctly, however, this transition can offer huge advantages for almost any business.

About the author: A former defense contractor for the US Navy, Sam Bocetta turned to freelance journalism in retirement, focusing his writing on US diplomacy and national security, as well as technology trends in cyberwarfare, cyberdefense, and cryptography.

The Big 3: Top Domain-Based Attack Tactics Threatening Organizations Fri, 17 Jan 2020 09:37:38 -0600 Nowadays, businesses across all industries are turning to owned websites and domains to grow their brand awareness and sell products and services. With this dominance in the e-commerce space, securing owned domains and removing malicious or spoofed domains is vital to protecting consumers and businesses alike. This is especially important because domain impersonation is an increasingly popular tactic among cybercriminals. One example of this is ‘look-alike’ URLs that trick customers by mimicking brands through common misspellings, typosquatting and homoglyphs. With brand reputation and customer security on the line, investing in domain protection should be a top priority for all organizations.

Domain-based attacks are so popular, simply because of how lucrative they can be. As mentioned above, attackers often buy ‘look-alike’ domains in order to impersonate a specific brand online. To do this, bad actors can take three main approaches: copycatting, piggybacking and homoglyphs/typosquatting. From mirroring legitimate sites to relying on slight variations that trick an untrained eye, it’s important to understand these top tactics cybercriminals use so you can defend your brand and protect customers. Let’s explore each in more detail.

1. Copycatting Domains

One tactic used by bad actors is to create a site that directly mirrors the legitimate webpage. Cybercriminals do so by registering the brand’s name under a top-level domain (TLD) that the real domain isn’t using, or by appending multiple TLDs to a domain name. With these types of attacks, users are more likely to be tricked into believing they are interacting with the legitimate organization online. This simplifies the bad actor’s journey, as the website appears to be legitimate, and such an attack will be more successful than one using a generic, throwaway domain. To amplify these efforts, bad actors will also use text and visuals that customers would expect to see on a legitimate site, such as the logo, brand name, and products. This sense of familiarity and trust puts potential victims at ease and makes them less aware of the copycat’s red flags. 
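As an illustration, a monitoring script might enumerate a brand name across common TLDs and flag any that currently resolve in DNS. This is only a minimal sketch under stated assumptions: the brand name and TLD list below are hypothetical placeholders, and a production system would also consult WHOIS records and inspect page content before raising an alert.

```python
import socket

def find_registered_lookalikes(brand, tlds):
    """Return candidate copycat domains that currently resolve in DNS."""
    hits = []
    for tld in tlds:
        domain = f"{brand}.{tld}"
        try:
            socket.gethostbyname(domain)  # resolves: someone is serving it
            hits.append(domain)
        except socket.gaierror:
            pass  # no DNS record; likely unregistered or parked without DNS
    return hits

# e.g. find_registered_lookalikes("examplebrand", ["com", "net", "co", "io"])
```

A resolving look-alike is not automatically malicious, which is why the results are candidates for investigation rather than verdicts.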

2. Piggybacking Name Recognition

Another approach attackers utilize is spoofed or look-alike domains that help them appear credible by piggybacking off the name recognition of established brands. These domains may be either parked or serving live content to potential victims. Parked domains are commonly leveraged to generate ad revenue, but can also be used to rapidly serve malicious content. They are also often used to distribute other brand-damaging content, like counterfeit goods.

3. Tricking Victims with Homoglyphs and Typosquatting

This last tactic has two main methods -- typosquatting and homoglyphs -- and relies on tricking unsuspecting internet users in places they are unlikely to look, so they never notice they are being spoofed. 

  • Typosquatting involves the use of common URL misspellings that a user is either likely to make of their own accord or may not notice at all, i.e. adding a letter to the organization’s name. If an organization has not registered domains that are close to their legitimate domain name, attackers will often purchase them to take advantage of typos. Attackers may also infringe upon trademarks by using legitimate graphics or other intellectual property to make malicious websites appear legitimate.
  • With homoglyphs, the basic principles of domain spoofing remain the same, but an attacker may substitute a look-alike character from an alphabet other than the Latin alphabet -- i.e., the Cyrillic “а” for the Latin “a.” Although these letters look identical, their Unicode values are different and, as such, they will be processed differently by the browser. With over 100,000 Unicode characters in existence, bad actors have an enormous opportunity. Another benefit of this type of attack is that it can fool traditional string matching and anti-abuse algorithms. 
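Because the confusable characters carry different Unicode code points, the Cyrillic-for-Latin substitution described above can often be caught mechanically. The sketch below simply flags every non-ASCII character in a domain along with its Unicode name; it is not a full confusables database, just a first-pass filter.

```python
import unicodedata

def suspicious_homoglyphs(domain):
    """Return (char, unicode_name) for every non-ASCII character in a
    domain, e.g. the Cyrillic 'а' (U+0430) hiding in 'pаypal.com'."""
    return [(ch, unicodedata.name(ch, "UNKNOWN"))
            for ch in domain if ord(ch) > 127]
```

A real screening pipeline would also decode Punycode (`xn--`-prefixed) labels before checking, since browsers transmit internationalized domain names in that form.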

Why domain protection is necessary

Websites are a brand’s mainstay in the digital age, as they are often the first point of engagement between a consumer, partner, or prospective employee and your organization. Cyberattackers see this as an opportunity to capitalize on that interaction. If businesses don’t take this problem seriously, their brand image, customer loyalty and ultimately financial results will be at risk. 

While many organizations monitor domains related to their brand in order to ensure that their brand is represented in the way it is intended, this is challenging for larger organizations composed of many subsidiary brands. Since these types of attacks are so common and the attack surface is so large, organizations tend to feel inundated with alerts and incidents. As such, it is crucial that organizations proactively and constantly monitor for domains that may be pirating their brand, products, trademarks or other intellectual property.

About the author: Zack Allen is both a security researcher and the director of threat intelligence at ZeroFOX. Previously, he worked in threat research for the US Air Force and Fastly.

Security Compass Receives Funding for Product Development and Expansion Fri, 17 Jan 2020 08:39:00 -0600 Toronto, Canada-based Security Compass has received additional funding from growth equity investment firm FTV Capital. The amount has not been disclosed, indicating that it is likely to be on the smaller side.  

According to the security firm, the purpose of the cash injection is to allow it to enhance its product portfolio and accelerate a planned global expansion.  

The company was founded by Nish Bhalla in 2005. Former COO Rohit Sethi becomes the new CEO. Bhalla remains on the Board, and is joined by Liron Gitig and Richard Liu from FTV Capital.  

Long-serving Sethi was Security Compass' first hire, and was an integral part of the creation of the company's SD Elements platform -- now the focus of the firm's operations. SD Elements helps customers put the Sec into DevOps without losing DevOps' development agility.   

"The strong trends towards agile development in DevOps," he says, "increased focus on application security and on improving risk management are on course for collision. Security Compass is uniquely positioned to help organizations address the inherent conflicts. With FTV's investment, we're poised to accelerate our growth while maintaining the culture of excellence we've worked so hard to build."  

The worldwide growth in security and privacy regulations, such as GLBA, FedRAMP, GDPR, CCPA and many others, requires that security is built into the whole product development lifecycle. "Security Compass' SD Elements solution," says FTV Capital partner Gitig, "is uniquely focused on the software stack, enabling DevOps at scale by helping enterprises develop secure, compliant code from the start."  

He continued, "SD Elements provides both engineering and non-engineering teams with a holistic solution for managing software security requirements in an efficient and reliable manner, alleviating meaningful friction in the software development life cycle, accelerating release cycles and improving business results. We are excited to work with the Security Compass management team in its next phase of global growth as a trusted information security partner."  

Security Compass claims more than 200 enterprise customers in banks, federal government and critical industries use its solutions to manage the risk of tens of thousands of applications.  

Related: Chef Launches New Version for DevSecOps Automated Compliance 

Related: ChatOps is Your Bridge to a True DevSecOps Environment 

Related: Shifting to DevSecOps Is as Much About Culture as Technology and Methodology   

Password Shaming Isn’t Productive – Passwords Are Scary Business Wed, 15 Jan 2020 14:25:53 -0600 We’ve all been in the situation of trying to set a new password – you need one uppercase character, one number and one character from a special list. Whatever password we come up with needs to be between 8 and 24 characters long. Once created, we need to remember that password, and heaven help us should we need to reset it. Yes, that’s the dreaded “you can’t reuse the last five passwords” message – but IT security requires the password to be changed every month. If you’ve lived in the corporate world, this experience is quite familiar. It is just as common across most web properties.
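The composition rules described above are simple to express in code, which is partly why they became so ubiquitous. A minimal sketch of such a policy check; the special-character list here is hypothetical, since each site defines its own:

```python
def meets_policy(password):
    """Check the policy described above: 8-24 characters, at least one
    uppercase letter, one digit, and one 'special' character."""
    specials = set("!@#$%^&*()-_=+")  # hypothetical special-character list
    return (
        8 <= len(password) <= 24
        and any(c.isupper() for c in password)
        and any(c.isdigit() for c in password)
        and any(c in specials for c in password)
    )
```

Note how the check says nothing about whether the password is memorable, unique across sites, or absent from breach corpora -- which is precisely the gap the rest of this article is about.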

Then along comes the dreaded “your account was part of a set of accounts which may have been breached” letter. As a consumer, you’re now left with some anxiety over what data might be in the hands of proverbial “bad guys”. Part of the anxiety comes from the prospect that these same bad guys might also now know your password, so you need to change it. If you’re like many people, that password likely was used in many places so the anxiety increases as you recall each of the websites you now need to update your password on – just to be safe.

Into this mess we have security pundits suggesting that multiple security factors are the solution. The net result is that not only do users need to remember their password, but they also need to enter a second code – often a set of numbers – in order to access their account. While password complexity, password expiration, and multi-factor authentication rules can each deter attempts to compromise an account, they do nothing to simplify the experience, and when it comes to consumer-grade devices or consumer websites, simplification is what we should be striving for.

Consider the current situation with Ring customers. It’s being reported that some users of Ring video devices are experiencing random voices speaking through their video devices. Some have even reported threats against them. These users are rightfully concerned for their safety, but some have been quick to lay the blame for the situation at the feet of the user. When someone states that “you should have a more secure password” or “you should enable 2FA”, those statements are fundamentally a form of victim shaming. The end user likely isn’t a security expert, but an expectation is being set that they should know how best to secure these devices.

The current situation with Ring devices isn’t new. We need only look back to September of 2016 when the US saw a major internet outage caused by an attack on the DNS infrastructure. This attack originated from a large quantity of DVRs, webcams and other consumer grade devices which weren’t properly password protected. At the time, there were similar cries that ‘password123’ wasn’t an effective password and users shouldn’t use it. This situation even prompted major service providers like GitHub to advise their customers to change their password – not because the user’s data had been part of a breach, but that the password had itself been part of a set of data sold on the black market.

These examples highlight a key challenge with product security: how to properly prevent unauthorized access while maintaining ease of use. This goal can’t be met if we shame users based on their security choices. Instead, product designers should look at ways to use context to best secure systems. In the case of a video camera, access to the camera in all forms should be from approved devices. For example, if a user configured the camera from an Android phone, then that device is by definition an approved device to access the camera. Since the phone can’t be in two places at the same time, if the app is running on the phone, then there is only one possible way to access the camera until the user authorizes additional devices from within the app. This entire example doesn’t rely on password complexity to secure the camera, but rather uses user context as part of the overall system security, where passwords are but one component. The net result is that while a simple password may not be advised from a security pundit’s perspective, the contextual information helps ensure that users don’t harm themselves. With the complexity of consumer devices only increasing, contextual security should be a priority for all – a situation which would avoid password shaming.
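Reduced to its essentials, the approved-device logic above can be sketched in a few lines. The device identifier is hypothetical, and a real product would bind devices cryptographically (e.g. per-device keys) rather than trusting a bare ID:

```python
def is_access_allowed(device_id, approved_devices, credentials_ok):
    """Even with valid (possibly weak) credentials, only devices the user
    previously authorized from within the app may reach the camera."""
    return credentials_ok and device_id in approved_devices

# The phone used for initial setup becomes the first approved device;
# additional devices must be authorized from that phone's app.
approved = {"android-phone-9f2c"}  # hypothetical device identifier
```

The point of the sketch is the second condition: a stolen password alone is no longer sufficient, without asking the user to memorize anything extra.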

About the author: Tim Mackey is Principal Security Strategist, CyRC, at Synopsys. Within this role, he engages with various technical communities to understand how to best solve application security problems.

Five Key Cyber-Attack Trends for This Year Tue, 14 Jan 2020 07:21:00 -0600 ‘It’s not if, but when’ is a long-established trope in the world of cybersecurity, warning organizations that no matter how robust their defenses, nor how sophisticated their security processes, they cannot afford to be complacent.

In 2020, little has changed – and yet everything has changed. The potential scale and scope of distributed denial of service (DDoS) attacks is far greater than it has ever been. Attackers can call on massive botnets to launch attacks, thanks to the ongoing rapid growth in cloud usage and expansion of the IoT, which has created more devices and resources that can be exploited. Furthermore, the vulnerabilities that these botnets can target are challenging to protect using standard network security solutions.

So what attack types will we see during this year? Here are 5 key trends that I expect to see developing during the coming months.

Attacks will reach unprecedented scale

According to the Department of Homeland Security, the scale of DDoS attacks has increased tenfold over the last five years. The DHS has also stated that if this trend continues, it is not certain whether corporate and critical national infrastructures will be able to keep up.

A perfect storm of factors is feeding into the growth in DDoS scale. Criminals are hijacking cloud resources, or simply renting public cloud capacity using stolen card details to massively amplify their attacks.  At the same time, the explosion in IoT devices gives criminals more potential recruits as soldiers for their botnet armies.  As a result, the gap between an organization’s available bandwidth on its internet connection and the size of an average DDoS attack is widening.  Even the biggest security appliances currently available cannot compete with attack volumes that in many cases are over 50 times greater than the capacity of an organization’s internet connection.

Game-changing industrialized attacks

Furthermore, DDoS attacks are no longer the realm of digital vandalism, launched primarily by individuals interested in testing their own capabilities or causing a nuisance. The underground economy is booming, with new marketplaces for cybercrime tools and techniques being introduced all the time. There is a clear recognition amongst bad actors that cyberattacks, including DDoS attacks, can be enormously profitable – whether for criminal or even political purposes.  Criminals are monetizing their investments in creating massive botnets by offering DDoS-for-hire services to anyone that wants to launch an attack, for just a few dollars per minute. 

And on the subject of politics, with a US presidential election coming up in 2020, and following recent destabilizing events in the Middle East, the potential for a major politically-motivated cyberattack is higher than ever. It would not be the first such attack – Estonia fell victim to a country-wide DDoS attack over a decade ago – but the blackout-level potential of today’s attacks is far greater. Simultaneously, it is becoming ever easier to obfuscate the true source of an attack, making definite attack attribution very difficult. From a political perspective, the ability to ‘frame’ an enemy for a large-scale attack has obvious, and worrying consequences.

Power infrastructures under targeted attack

On a related point, targeting industrial controls has become an increasing focus for nation-state attacks. The US power grid, and power infrastructure in Ukraine are both known to have been targeted by state-sponsored Russian hackers.

As more industrial systems are exposed to the public internet, a targeted DDoS attack against these could easily cause outages that interrupt critical power, gas or water supplies (think Industry 4.0). And at the other end of the supply chain, Trend Micro’s recent Internet of Things in the Cybercrime Underground report described how hackers are sharing information on how to hack internet-connected gas pumps and related devices often found in industrial applications. These devices could either be flooded to cause a wide-ranging blackout, or infected and recruited into botnets for use in DDoS attacks, or to manipulate industrial processes. 

APIs are the weakest link

However, DDoS attacks are no longer limited to merely attacking or exploiting organizations’ infrastructure. In 2020, I expect attacks against APIs to move into the spotlight. As we know, more and more organizations are moving workloads into the cloud, and this means that APIs are increasing in volume.

Every single smart device within an IoT ecosystem, for example, is ultimately interacting with an API. Far less bandwidth is needed to attack APIs, and they can rapidly become hugely disruptive bottlenecks. Unlike a traditional DDoS attack, which bombards a website or network with bogus traffic so that infrastructure grinds to a halt, an API DDoS attack focuses on specific API requests which generate so much legitimate internal traffic that the system is attacking itself – rather like a massive allergic reaction. Many cloud-based organizations are vulnerable to this, and APIs are harder to protect using conventional methods. So I expect attackers to increasingly exploit this vulnerable spot in organizations’ defensive armor.
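One common mitigation for this request-level abuse is per-client rate limiting on expensive API endpoints. Below is a minimal token-bucket sketch, one of several standard algorithms and by no means a complete DDoS defense; real deployments enforce this at the gateway, keyed per client and per endpoint:

```python
import time

class TokenBucket:
    """Each request consumes one token; tokens refill at `rate` per second
    up to `capacity`. Costly endpoints can be given smaller buckets."""
    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # over the limit: reject, e.g. with HTTP 429
```

Because the limiter counts requests rather than bytes, it catches exactly the low-bandwidth, high-cost pattern described above that volumetric defenses miss.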

The cloud is not a safe haven

There is an assumption in the market that migrating workloads to public cloud providers automatically makes businesses better off – and in many ways of course, this is true. Flexibility, scalability, agility, cost-effectiveness – there are myriad business benefits to be gleaned from the cloud. Yet the assumption that the major providers automatically offer attack-proof security is an illusion. In October 2019, AWS was taken offline for eight hours, demonstrating that even the biggest public cloud providers are vulnerable to DDoS attacks, with hugely disruptive potential knock-on effects to their customers. Some studies estimate that knocking out a single cloud provider could already cause $50 billion to $120 billion in economic damage—on a par with the aftermath resulting from Hurricane Katrina and Hurricane Sandy.

In conclusion, these points may paint a bleak picture for 2020. But companies that adopt a ‘not if, but when’ mindset will be well positioned to counter the escalating threats. Using solutions capable of fending off high-volume DDoS attacks as well as resource-intensive exploits at the protocol and application levels, organizations can stay a step ahead of threat actors and avoid becoming their next victim.

About the author: Marc Wilczek is Chief Operating Officer at Link11, an IT security provider specializing in DDoS protection.

Copyright 2010 Respective Author at Infosec Island

20/20 Vision on 2020's Network Security Challenges
Mon, 13 Jan 2020 12:20:00 -0600

As the new year starts, it’s natural to think about the network security challenges and opportunities that organizations are likely to face over the next 12 months – and how they will address them. Of course, we are likely to see brand-new threats emerging and unpredictable events unfolding. But here are four key security challenges that I believe will be at the top of enterprise agendas this year.

Managing misconfigurations

The first challenge that organizations will address is data and security breaches caused by misconfigurations. These have been a constant problem for enterprises for decades, the most recent high-profile example being the large-scale incident that impacted Capital One in 2019. Such breaches are usually caused by simple human error, leaving a security gap that is exploited by actors from outside the organization. Unfortunately, humans are not getting any better at avoiding mistakes, so breaches due to misconfigurations will continue to be a problem that needs to be fixed.

At the same time, the technology environment that network security staff work within is getting ever more complex. There are more network points to secure – both on-premises and in public or private clouds – and therefore a much larger attack surface. The situation is getting worse – as highlighted in our 2019 cloud security survey, which showed that two-thirds of respondents use multiple clouds, with 35% using three or more cloud vendors and over half operating hybrid environments. The only solution to this growing complexity is network security automation. Humans need tools to help them set and manage network configurations more accurately and more efficiently, so the demand for security automation is only going to increase.
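As an illustration of what even very basic automation can catch, the hypothetical checker below flags firewall rules that allow traffic from anywhere to sensitive ports. The rule format is invented for this example and is not taken from any specific product.

```python
# Flag firewall rules that expose sensitive ports to the whole internet.
SENSITIVE_PORTS = {22, 3389, 3306}  # SSH, RDP, MySQL

def find_risky_rules(rules):
    """Return rules that allow any source ('0.0.0.0/0') to a sensitive port."""
    return [
        r for r in rules
        if r["source"] == "0.0.0.0/0"
        and r["port"] in SENSITIVE_PORTS
        and r["action"] == "allow"
    ]

rules = [
    {"source": "0.0.0.0/0", "port": 443, "action": "allow"},    # fine: public HTTPS
    {"source": "0.0.0.0/0", "port": 22, "action": "allow"},     # risky: open SSH
    {"source": "10.0.0.0/8", "port": 3306, "action": "allow"},  # fine: internal only
]
risky = find_risky_rules(rules)
```

Run on every configuration change, even a simple check like this turns a silent human error into an immediate, reviewable alert.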

Compliance complexity

Achieving and maintaining regulatory compliance has long been a major challenge for networking staff, and as networks become more complex it is only getting harder. In recent years, we have seen a raft of new compliance frameworks introduced across multiple verticals and geographical regions. Regulators worldwide are flexing their muscles.

The crucial point to understand is that new regulations typically don’t replace existing regimes – rather, they add to what is already in place. The list of regulatory demands facing organizations is getting longer and achieving and demonstrating compliance is becoming an ever-larger commitment for organizations.  Once again, the only solution is more automation: Being in “continuous compliance”, with automatic creation of audit-ready reports for all the relevant regulations, delivers both the time and resource savings that organizations need in order to meet their compliance demands.

The turn to intent-based network security

What do I mean by intent-based network security? It is ultimately about asking a simple question – why is this security control configured the way it is?

Understanding the intent behind individual network security rules is crucial for a wide range of network maintenance and management tasks, from responding to data breaches to undertaking network cleanups, from working through vulnerability reports to dealing with planned or unplanned downtime. In every scenario, you need to understand why the security setting is the way it is, and who to notify if something has gone wrong or if you want to amend or remove the rule.

And the answer is always that a particular business application needed connectivity from point A to point B. The organization “just” needs to find out which application that was – and that’s 95% of the intent.

The trouble is that organizations are usually not diligent enough about recording this intent.  The result is a huge number of undocumented rules whose intent is unclear. In other words, organizations are in a ‘brownfield’ situation; they have too many rules, and not enough information about their intent.
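One lightweight way to avoid accumulating undocumented rules in the first place is to record intent metadata alongside each rule at creation time. The structure below is a hypothetical sketch, not a feature of any particular firewall platform.

```python
from dataclasses import dataclass

@dataclass
class FirewallRule:
    source: str
    destination: str
    port: int
    # Intent metadata: which business application needed this, and who owns it.
    application: str = ""
    owner_email: str = ""

def undocumented(rules):
    """Return rules whose intent was never recorded."""
    return [r for r in rules if not r.application or not r.owner_email]

rules = [
    FirewallRule("10.1.0.0/16", "10.2.0.5", 5432,
                 application="billing-db", owner_email="dba@example.com"),
    FirewallRule("0.0.0.0/0", "10.2.0.9", 8080),  # intent unknown
]
flagged = undocumented(rules)
```

With intent captured this way, cleanup and breach-response questions ("why does this rule exist, and who do we call?") become a simple lookup rather than archaeology.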

So, I believe that this year, we will see more and more deployment of technologies that allow a retrospective understanding of the intent behind security rules, all based on the traffic observed on the network. By listening to this traffic and applying algorithms, these new technologies can reverse-engineer and ultimately identify, and document, the original intent.

Embracing automation

Public cloud vendors are providing more and more security features and controls, and this trend looks set to continue, with more security controls becoming available as part of their core offerings. This is a good thing. The more controls available, the more secure organizations can be – if they take advantage of the additional capabilities.

But this doesn’t mean less work for IT and security teams. They need to take ownership of these new capabilities, and to configure and manage them properly – and this takes us straight back to the misconfiguration issue I outlined earlier.

In conclusion, if I had to distil my predictions for network security this year into a single point, it would be the need to embrace more automation across all security and compliance-related processes. This is at the core of enabling organizations to manage the ever-growing complexity of their networks and to respond to the constantly evolving threat landscape.

About the author: Professor Avishai Wool is the CTO and Co-Founder of AlgoSec.

Copyright 2010 Respective Author at Infosec Island

Is Cybersecurity Getting Too Complex?
Mon, 13 Jan 2020 12:14:18 -0600

Weighing SMB Security Woes Against the Managed Security Promise

Looking strictly at the numbers, it appears small to mid-sized businesses (SMBs) are sinking under the weight of their own IT complexity. To be more efficient and competitive, SMBs are reaching for the same IT solutions that large enterprises consume: hybrid/multi-cloud solutions (61% have a multi-cloud strategy, with 35% claiming hybrid cloud use), remote work tools, and a dizzying array of platforms. But unlike large enterprises, SMBs often have fewer dedicated information security staff to manage the increasing attack surface these systems create. As if to prove the point, attacks on SMBs are escalating: 66% experienced a cyberattack in the past year, with average incident costs on the rise. In a world where smaller business data is as monetizable as that of the large enterprise, it’s not surprising that bad actors target organizations they may reasonably assume have weaker defenses.

I think it’s safe to say the SMB is keeping pace with their larger brethren in terms of IT complexity (if not scale) but falling short in terms of the methods to keep a handle on it—and they appear to be suffering the consequences.

Are Managed Security Solutions the Answer?

While it appears many SMBs could use a lifeline, the extent to which managed security services (MSS) are that holistic answer requires a deeper analysis of the organization’s unique strengths and weaknesses. Cyber risk is not a simple problem, and solutions are not “one-size-fits-all.” On the plus side, MSS offers companies the ability to quickly augment internal capabilities with a high degree of specialized expertise, tools, and solutions they may lack, without having to take on the daily maintenance, hire from a competitive labor pool, or burden existing staff. By outsourcing these capabilities, companies can leverage teams that are highly specialized in security, enabling them to improve their defenses in key areas at a lower overall cost as measured against the CapEx, OpEx, and time requirements of standing up the same capabilities internally. Any measure of relative costs must also include the value of mitigating cyber risk; such risks, if capitalized on by malicious actors, carry significant costs of their own.

However, there is a wide range of managed security services out there—and most providers would happily sell them all to every prospective customer. The burden is on the SMB to fully understand whether and in what areas they need that extra support to supplement the tools, people, processes, and capabilities they already have.

Managed Security Services: Assessing for Optimal Value

Most organizations have made investments in information security tools and resources. A few outperformers (usually large enterprises) may already be at best-practice security in many areas, with dedicated staff, their own Security Operations Center and endpoint detection and response capabilities. Such enterprises may have little need to outsource security functions. Others may focus little on security and require across-the-board help. Most organizations will be somewhere in the middle. Ultimately, the goal should be to maximize the use of the investments already made and augment staff with MSS only where you can get the most strategic value for the expenditure.

To begin, organizations should consider executing a security risk assessment—preferably against a security framework such as the NIST Cybersecurity Framework (CSF) or other, potentially required industry-specific framework (HITRUST would be an example in the healthcare sector). These can be conducted in house or via third-party assessment firms. The output should enable the organization to take an in-depth look at their people, processes, and technology and get a realistic view of where their gaps lie. This up-front work should help isolate areas where MSS would be of great value; and it may identify areas where a few investments may be enough to build internal capabilities sufficiently to manage in house. 

At the end of the day, businesses must ensure they have enough resources to do everything from basic blocking and tackling on security – such as log monitoring, patching, and sorting through alerts (routine, repetitive, time-consuming tasks) – to incident readiness and response, and security for endpoints, cloud, and Software as a Service (SaaS), among others. Because SMB environments are indeed getting vastly more complex and difficult to defend, gaps often appear in obvious pockets across this span of specialized requirements, in both tools and people, pointing directly to where MSS can provide a lifeline.

Managed Security Services for the SMB: The Net-Net

There is no across-the-board answer for whether MSS is right for every SMB and which services offer the most value. Yet applied strategically, MSS can greatly help SMBs bridge the divide between their growing complexity (and associated security vulnerabilities) and that elusive utopia called “Best-Practice Security.” MSS providers do nothing but security and can help address the cybersecurity skills shortage. But to find the right services that complement specific resource gaps, enterprises should first fully assess their own security current state to find out where MSS will add the most value.

About the author: Sam Rubin is a Vice President at The Crypsis Group, where he leads the firm’s Managed Security Services business, assists clients, and develops the firm’s business expansion strategies.

Copyright 2010 Respective Author at Infosec Island

Global Security Threats Organizations Must Prepare for in 2020
Wed, 08 Jan 2020 14:43:21 -0600

As we kick off a new decade, it's time, once again, to gaze into our crystal ball and look at the year ahead.

In 2020, businesses of all sizes must prepare for the unknown, so they have the flexibility to withstand unexpected and high impact security events. To take advantage of emerging trends in both technology and cyberspace, businesses need to manage risks in ways beyond those traditionally handled by the information security function, since new attacks will most certainly impact both shareholder value and business reputation.

After reviewing the current threat landscape, there are three dominant security threats that businesses need to prepare for in 2020. These include, but are not limited to:

  • The Race for Technology Dominance 
  • Third Parties, the Internet of Things (IoT) and the Cloud 
  • Cybercrime – Criminals, Nation States and the Insider

An overview for each of these areas can be found below:

The Race for Technology Dominance 

Technology has changed the world in which we live. Old norms are changing, and the next industrial revolution will be entirely technology driven and technology dependent. In short, technology will enable innovative digital business models and society will be critically dependent on technology to function. Intellectual property will be targeted as the battle for dominance rages. 

Evidence of fracturing geopolitical relationships began to emerge in 2018, demonstrated by the US-China trade war and Brexit in the UK. In 2020, the US and China will increase restrictions and protectionist measures in pursuit of technology leadership, leading to a heightened digital cold war in which data is the prize. This race to develop strategically important next-generation technology will drive an intense, nation-state-backed increase in espionage. The ensuing knee-jerk reaction of a global retreat into protectionism, increased trade tariffs and embargoes will dramatically reduce the opportunity to collaborate on the development of new technologies. The UK’s exclusion from the EU’s Galileo satellite system as a result of Brexit is one example.

New regulations and international agreements will not be able to fully address the issues powered by advances in technology and their impact on society. Regulatory tit-for-tat battles will manifest across nation states and, rather than encouraging innovation, are likely to stifle and constrain new developments, pushing up costs and increasing the complexity of trade for multinational businesses.

Third Parties, the IoT and the Cloud 

A complex interconnection of digitally connected devices and superfast networks will prove to be a security concern as modern life becomes entirely dependent on technology. Highly sophisticated and extended supply chains present new risks to corporate data as it is necessarily shared with third party providers. IoT devices are often part of a wider implementation that is key to the overall functionality.

Few devices exist in isolation, and it is the internet component of the IoT that reflects that dependency. For a home or commercial office to be truly 'smart', multiple devices need to work in cooperation. For a factory to be 'smart', multiple devices need to operate and function as an intelligent whole. However, this interconnectivity presents several security challenges, not least in the overlap of consumer and operational/industrial technology.

Finally, so much of our critical data is now held in the cloud that it opens an opportunity for cybercriminals and nation states to sabotage the cloud itself, aiming to disrupt economies and take down critical infrastructure through physical attacks and operational vulnerabilities across the supply chain.

Cybercrime – Criminals, Nation States and the Insider

Criminal organizations have a massive resource pool available to them and there is evidence that nation states are outsourcing as a means of establishing deniability. Nation states have fought for supremacy throughout history, and more recently, this has involved targeted espionage on nuclear, space, information and now smart technology. Industrial espionage is not new and commercial organizations developing strategically important technologies will be systematically targeted as national and commercial interests blur. Targeted organizations should expect to see sustained and well-funded attacks involving a range of techniques such as zero-day exploits, DDoS attacks and advanced persistent threats.

Additionally, the insider threat is one of the greatest drivers of security risks that organizations face as a malicious insider utilizes credentials to gain access to a given organization’s critical assets. Many organizations are challenged to detect internal nefarious acts, often due to limited access controls and the ability to detect unusual activity once someone is already inside their network. 

The threat from malicious insider activity is an increasing concern, especially for financial institutions, and will continue to be so in 2020.

Don’t Get Left Behind

Today, the stakes are higher than ever before, and we’re not just talking about personal information and identity theft anymore. High level corporate secrets and critical infrastructure are constantly under attack and organizations need to be aware of the emerging threats that have shifted in the past year, as well as those that they should prepare for in the coming year.

By adopting a realistic, broad-based, collaborative approach to cyber-security and resilience, government departments, regulators, senior business managers and information security professionals will be better able to understand the true nature of cyber-threats and respond quickly and appropriately. This will be of the highest importance in 2020 and beyond.

About the author: Steve Durbin is Managing Director of the Information Security Forum (ISF). His main areas of focus include strategy, information technology, cyber security and the emerging security threat landscape across both the corporate and personal environments. Previously, he was senior vice president at Gartner.


Copyright 2010 Respective Author at Infosec Island

Six Reasons for Organizations to Take Control of Their Orphaned Encryption Keys
Thu, 19 Dec 2019 15:32:54 -0600

A close analysis of past cybersecurity attacks shows that, in most cases, the head of the cyber kill chain is formed by some kind of privilege abuse. In fact, Forrester estimates that compromised privileged credentials play a role in at least 80 percent of data breaches. This is the reason privileged access management (PAM) has gained so much attention over the past few years. With securing and managing access to business-critical systems at its core, PAM aims to provide enterprises with a centralized, automated mechanism to regulate access to superuser accounts. PAM solutions ideally do this by facilitating end-to-end management of the privileged identities that grant access to these accounts.

However, the scope of privileged access security is often misconceived and restricted to securing and managing root account passwords alone. Passwords, beyond a doubt, are noteworthy privileged access credentials. But the constant evolution of technology and expanding cybersecurity perimeter calls for enterprises to take a closer look at the other avenues of privileged access, especially encryption keys—which despite serving as access credentials for huge volumes of privileged accounts, are often ignored. 

This article focuses on the importance of encryption key management: why enforcing SSH key and SSL certificate management is vital, and how doing so can effectively bridge the gaps in your enterprise privileged access security strategy.

1. Uncontrolled numbers of SSH keys trigger trust-based attacks

The average organization houses over 23,000 keys and certificates, many of which grant sweeping access to root accounts, says a Ponemon survey. Also, a recent report on the impact of unsecured digital identities states that 71% of respondents had no idea how many keys existed in their organization or the extent of their access. Without a centralized key management approach, anybody in the network can create or duplicate any number of keys. These keys are often randomly generated as needed and soon forgotten once the task they are associated with is done. Malicious insiders can take advantage of this massive ocean of orphaned SSH keys to impersonate admins, hide comfortably behind encryption, and take complete control of target systems.

2. Static keys create permanent backdoors

Enterprises should periodically rotate their SSH keys to avoid privilege abuse, but huge volumes of unmanaged SSH keys make key rotation an intimidating task for IT administrators. Moreover, due to a lack of proper visibility on which keys can access what, there is widespread apprehension about rotating keys in fear of accidentally blocking access to critical systems. This leads to a surge of static SSH keys, which have the potential to function as permanent backdoors. 

3. Unintentional key duplication increases the chance of privilege abuse

For the sake of efficiency, SSH keys are often duplicated and circulated among various employees in an organization. Such unintended key duplication creates a many-to-many key-user relationship, which highly increases the possibility of privilege abuse. This also makes remediation a challenge since administrators have to spend a good amount of time revoking keys to untangle the existing relationships before creating and deploying fresh, dedicated key pairs.

4. Failed SSL certificate renewals hurt your brand's credibility

SSL certificates, unlike keys, have a set expiration date. Failing to renew SSL certificates on time can have huge implications on website owners as well as end users. Browsers don't trust websites with expired SSL certificates; they throw security error messages when end users try to access such sites. One expired SSL certificate can drive away potential customers in an instant, or worse, lead to personal data theft for site visitors. 
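Expiry tracking is straightforward to automate. The helper below uses only the Python standard library to compute how many days remain before a certificate's notAfter date; the date values in the example are illustrative.

```python
import ssl
from datetime import datetime, timezone

def days_until_expiry(not_after: str, now: datetime) -> int:
    """Days remaining before a certificate expires.

    `not_after` uses the format returned by ssl.getpeercert(),
    e.g. 'Jun  1 12:00:00 2030 GMT'.
    """
    expires = datetime.fromtimestamp(
        ssl.cert_time_to_seconds(not_after), tz=timezone.utc)
    return (expires - now).days

# Example: alert when fewer than 30 days remain.
now = datetime(2030, 5, 20, tzinfo=timezone.utc)
remaining = days_until_expiry("Jun  1 12:00:00 2030 GMT", now)
needs_renewal = remaining < 30
```

In practice, the `notAfter` string would come from `ssl.getpeercert()` on a live connection to each site in the certificate inventory, with the check scheduled daily so renewals are flagged well before browsers start showing errors.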

5. Improper SSL implementations put businesses at risk

Many businesses rely completely on SSL for internet security, but they often don't realize that a mere implementation of SSL in their network is not enough to eliminate security threats. SSL certificates need to be thoroughly examined for configuration vulnerabilities after they are installed. When ignored, these vulnerabilities act as security loopholes which cybercriminals exploit to manipulate SSL traffic and launch man-in-the-middle (MITM) attacks.

6. Weak certificate signatures go unheeded

The degree of security provided by any SSL certificate depends on the strength of the hashing algorithm used to sign the certificate. Weak certificate signatures make them vulnerable to collision attacks. Cybercriminals exploit such vulnerabilities to launch MITM attacks and eavesdrop on communication between users and web servers. Organizations need to isolate certificates that bear weak signatures and replace them with fresh certificates containing stronger signatures. 

Bridging the gaps in your PAM strategy

All the above scenarios highlight how important it is to widen the scope of your privileged access security strategy beyond password management. Even with an unyielding password manager in place, cybercriminals have plenty of room to circumvent security controls and gain access to superuser accounts by exploiting unmanaged authentication identities, including SSH keys and SSL certificates. Discovering all such identities capable of granting privileged access and bringing them under one roof is an important step enterprises should take to bridge the gaps in their privileged access security strategy. After all, today's unaccounted-for authentication identities could become tomorrow's stolen privileged credentials.

About the author: Shwetha Sankari is an IT security product consultant at ManageEngine. With key area of expertise in content marketing, she spends her time researching the latest trends in the IT security industry and creating informative user education content.

Copyright 2010 Respective Author at Infosec Island

The Cybersecurity Skills Gap: An Update for 2020
Wed, 18 Dec 2019 00:11:22 -0600

The gap in trained, experienced cybersecurity workers is one of those perennial problems: much ink is spilled every year in assessing the scale of the problem and what can be done about it. We have recently pointed out, for instance, the importance of stopping attacks before they happen, and the fact that you can’t hire your way out of the skills shortage.

As we move into 2020, it's apparent that despite this focus on the problem, it has not been solved. There is still a huge skills gap when it comes to cybersecurity, and in many ways, it is getting worse. According to Cyber Crime Magazine, there may be as many as 3.5 million unfilled cybersecurity jobs by 2021, and recent high-profile cyber breaches provide further evidence that the problem is already becoming acute.

That said, there are some new trends emerging when it comes to managing this crisis. In this article, we'll take a look at some of the innovative ways that companies are getting around the problem.

The Widening Gap

First, some context. At the most basic level, the skills gap in cybersecurity is the product of a simple fact: there are more cybersecurity positions that need to be filled than there are qualified graduates to fill them. This is despite colleges encouraging students to study cybersecurity, and despite companies encouraging their existing employees to retrain.

Look a little deeper, however, and some other reasons for the shortage become apparent. One is that a worrying number of qualified professionals are leaving the cybersecurity sector. At cybersecurity conferences, it’s not uncommon to see entire tracks about managing mental health, addiction, and work stress. As these experienced professionals leave the sector, more pressure falls on younger, less experienced colleagues.

Secondly, a major source of stress for cybersecurity professionals is that they are often assigned total (or at least partial) responsibility for the losses caused by data breaches. In many cases, this is unfair, but persists because many companies still see "security" as a discrete discipline that can be dealt with in isolation from other IT tasks, corporate processes, and reputation management.

Training and Development

Addressing these issues requires more than just increasing the number of qualified graduates. Instead, businesses need to take more innovative approaches to hire, train, and retain cybersecurity staff.

These approaches can be broken down into three types. The first is that cybersecurity training needs to change from an event into a process. Some have argued that traditional, classroom-based cybersecurity training doesn’t reflect the field and that this training needs to be delivered in a more vocational way. Instead of hiring one cybersecurity expert, companies should look to train all of their employees in the basics of cybersecurity. 

In fact, even cybersecurity professionals might benefit from this type of training. Despite companies being resistant to spending more on employee training, it offers one of the highest returns on investment a business can make. In addition, recent developments have made it clear that continuous training is needed – concerns about the security implications of 5G networks, for example, are now forcing seasoned professionals to go back to school.

Secondly, dramatic gains in cybersecurity can be achieved without employing dedicated staff. One of the major positive outcomes of the cybersecurity skills gap, in fact, has been the proliferation of free, easy to use security tools (like VPNs and secure browsers), which aim to make cybersecurity "fool-proof", even for staff with little or no technical training. These tools can be used to limit the risk of cyberattacks without the necessity of complex (and expensive) dedicated security solutions.

Third, the rise of "security as a service" suggests that the cybersecurity sector of the future is one that relies on outsourcing and subcontracting. Plenty of companies already outsource business processes that would have been done in-house just a few years ago – everything from creating a website to outsourcing pen testing – and taking this approach may provide a more efficient way to use the limited cybersecurity professionals that are available. 

AI Tools: The Future?

Another striking feature of the cybersecurity skills debate, and one which is especially apparent as we move into 2020, is the level of discussion around AI tools. 

Unfortunately, assessing the efficacy of AI tools when it comes to improving cybersecurity is difficult. That's because many cybersecurity professionals are skeptical that AI is a useful ally in this fight. In some ways, they are undoubtedly correct: in a recent study, one popular AI-powered antivirus was defeated with just a few lines of text appended to popular malware.

On the other hand, it must be recognized that cybersecurity pros have a vested interest in talking down how effective AI tools are. If AIs were able to protect networks on their own, after all, cybersecurity pros would be out of a job. Or rather they would be if there were not so many unfilled cybersecurity vacancies.

Ultimately, given the lack of qualified or trained professionals, AI tools are likely to continue to be a major focus of investment for companies from 2020 onwards. This, in turn, entails that IT professionals overcome some of their reticence about working with them, and begin to see AIs less as competitors and more as collaborators.

The Bottom Line

It's also worth pointing out that the individual trends we've mentioned can be seen as working against each other. In some cases, companies have attempted to overcome the skills gap by training large numbers of employees to perform cybersecurity roles. Others have gone in the other direction – outsourcing specific aspects of their cybersecurity to hyper-specialized companies. Others are taking a gamble that AI tools are going to eventually replace the need for (at least some of their) cybersecurity professionals.

Which of these trends is eventually going to dominate the market remains to be seen, but one thing is clear: 2020 is a critical juncture for the entire cybersecurity sector.

Copyright 2010 Respective Author at Infosec Island

Modernizing Web Filtering for K-12 Schools
Wed, 18 Dec 2019 00:05:19 -0600

In today’s 24/7 internet-access world, network administrators need next-generation web filtering to allow the internet traffic they do want and stop the traffic they don’t. How does this affect the education vertical, with students in K-12? For starters, a lot has changed since the Children’s Internet Protection Act (CIPA) was enacted in 2000. Almost two decades after that dated act, the academic landscape has shifted drastically: we are no longer working from computer labs on a single network, but in a world of personal devices that expect consistent Wi-Fi and cloud access.

The internet is immense, and as it continues to rapidly evolve, so should the filtering tactics used to keep students safe. But while the law requires schools and public libraries to block or filter obscene and harmful content, that still leaves room for interpretation.

How Much Web Filtering is Too Much?

A 2017 survey shows that 63% of K-12 teachers regularly use laptops and computers in the classroom, making the topic of web filtering in K-12 environments crucial. With the rise of tech-savvy students and classroom settings, precautions must be taken. However, there is such a thing as ‘over-filtering’ and ‘over-blocking.’

Current laws and guidelines that prevent students from accessing crucial learning and research materials have become a rising issue that schools and parents are constantly battling with the FCC. As noted in The Atlantic, excessive filtering can limit students’ research on topics that can be useful to, for example, debate teams or students seeking anti-bullying resources. Instead of enforcing the same rules across the entire school or district, network administrators need to develop a solution that offers flexibility and customizable options, pinpointing the specific websites, applications and categories that each grade level may access.

Working Together to Clearly Define Web Access

In the past, schools practiced an over-zealous "block everything" approach. Now, it is important for school administrators and IT departments to work together to define web access by grade, age, project duration and keyword search. This gives students access to educational resources while administrators keep acceptable parameters in place, blocking inappropriate content from sites and applications.
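A per-grade policy like the one described above can be modeled as a simple lookup table. The sketch below is illustrative only: the grade bands and category names are invented, not drawn from any particular filtering product.

```python
# Hypothetical per-grade web-filtering policy: each grade band maps to
# the set of content categories students in that band may reach.
POLICY = {
    "K-5":  {"education", "reference"},
    "6-8":  {"education", "reference", "news"},
    "9-12": {"education", "reference", "news", "debate", "health"},
}

def is_allowed(grade_band: str, category: str) -> bool:
    """Return True if the requested category is permitted for the grade band."""
    return category in POLICY.get(grade_band, set())

print(is_allowed("K-5", "news"))     # False: blocked for younger students
print(is_allowed("9-12", "health"))  # True: e.g. anti-bullying resources
```

The same table could be extended with per-classroom or per-project overrides, which is the flexibility the article argues for.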

Assessing Network Necessities

Academic boards can take it one step further by putting access controls on all school networking, including Wi-Fi networks, to control the use of personal devices during school hours.

In addition to web filtering, adding controls such as enforcing safe search on popular search engines and using Restricted Mode on YouTube will increase productivity, limit cyberbullying, and deny access to students searching for ways to inflict self-harm or commit other acts of violence.
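Safe-search enforcement of this kind is commonly done at the DNS layer: Google and YouTube publish special hostnames (forcesafesearch.google.com and restrict.youtube.com) that, when substituted for the normal hostname, force the restricted experience. A minimal sketch of the rewrite table a filtering resolver would apply:

```python
# DNS-level rewrites used to enforce safe search network-wide.
# Resolving the normal hostname to the published "force" hostname
# makes the service serve only filtered results.
SAFE_SEARCH_CNAMES = {
    "www.google.com":  "forcesafesearch.google.com",
    "www.youtube.com": "restrict.youtube.com",  # or restrictmoderate.youtube.com
}

def rewrite_query(hostname: str) -> str:
    """Return the hostname a filtering resolver should actually look up."""
    return SAFE_SEARCH_CNAMES.get(hostname.lower(), hostname)

print(rewrite_query("www.google.com"))  # forcesafesearch.google.com
print(rewrite_query("example.org"))     # example.org (unchanged)
```

In practice this mapping would live in the school's DNS resolver configuration rather than application code; the Python form here is just to show the logic.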

Why limit students' education by blocking crucial learning and research materials? By custom-configuring a network to meet the needs of each grade level and classroom, educators encourage students to become academically resourceful. IT departments and school administrators must form a partnership to build a solution that gives students, teachers, and administrators access to the educational tools they need.

It's time to break down the glass wall and acknowledge the wealth of educational materials and information now available through various media channels and platforms. The internet, once a luxury accessible to only a few, is now an amenity available to almost anyone - including young students - underscoring the importance of fine-tuned web filters and content security across K-12 networks.

Copyright 2010 Respective Author at Infosec Island
University of Arizona Researchers Going on Offense and Defense in Battle Against Hackers Wed, 04 Dec 2019 12:01:48 -0600 The global hacker community continues to grow and evolve, constantly finding new targets and methods of attack. University of Arizona-led teams will be more proactive in the battle against cyberthreats thanks to nearly $1.5 million in grants from the National Science Foundation.

The first grant, for nearly $1 million, will support research contributing to an NSF program designed to protect high-tech scientific instruments from cyberattacks. Hsinchun Chen, Regents Professor of management information systems at the Eller College of Management, says the NSF's Cybersecurity Innovation for Cyberinfrastructure program is all about protecting intellectual property, which hackers can hold for ransom or sell on the darknet.

"You have infrastructure for people to collect data from instruments like telescopes," Chen said. "Scientists use that to collaborate in an open environment. Any environment that is open has security flaws."

A major hurdle to protecting scientific instruments, Chen said, is that the risks to science facilities have not been properly analyzed. He will lead a team using artificial intelligence to study hackers and categorize hundreds of thousands of risks, then connect those risks to two partner facilities at the University of Arizona.

Chen's team is working with CyVerse, a national cyberinfrastructure project led by the University of Arizona, and the Biosphere 2 Landscape Evolution Observatory project. CyVerse develops infrastructure for life sciences research and had researchers involved in this year's black hole imaging. Biosphere 2's LEO project collects data from manmade landscapes to study issues including water supply and how climate change will impact arid ecosystems.
The team will comb through hacker forums to find software tools designed to take advantage of computer system flaws, scan CyVerse and LEO internal and external networks, and then link specific tools found in the forums to specific network vulnerabilities.

"The University of Arizona is a leader in scientific discovery, and we are actively working on solutions to the world's biggest challenges. To do that, it is imperative to keep our state-of-the-art instruments safe from cyberattacks," said UArizona President Robert C. Robbins. "Hsinchun Chen is once again at the forefront of innovation in cybersecurity infrastructure, and this funding will help ensure the data and discoveries at CyVerse and Biosphere 2 are protected, which ultimately enables our researchers to keep working toward a bright future for us all."

Chen's co-principal investigators on the project include: Mark Patton, senior lecturer in management information systems; Peter Troch, science director at Biosphere 2; Edwin Skidmore, director of infrastructure at the BIO5 Institute, which houses CyVerse; and Sagar Samtani, assistant professor in the University of South Florida information systems and decision sciences department and one of Chen's former students.

Chen is also leading an effort to improve the process of collecting and analyzing data from international hacker communities. The NSF, through its Secure and Trustworthy Cyberspace program, has awarded a $500,000 grant to Chen and a team of researchers to gather and analyze data on emerging threats in international hacker markets operating in Russia and China.

"We're creating infrastructure and technologies based on artificial intelligence to study darknet markets," Chen said, "meaning the places where you can buy credit cards, malware to target particular industries or government, service to hack other people, opioids, drugs, weapons — it's all part of the dark web."
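One plausible way to link exploit tools found in forum posts to specific network vulnerabilities, as the team intends to do, is to match on shared CVE identifiers. The sketch below uses invented forum snippets and scan findings; only the CVE-matching idea is taken from the article.

```python
import re

# Illustrative data: exploit chatter scraped from forums vs. CVEs
# reported by a vulnerability scan of the facility's networks.
forum_posts = [
    "selling exploit kit, works on CVE-2019-0708 (BlueKeep) and CVE-2017-0144",
    "new RCE poc for CVE-2019-11510 pulse vpn",
]
scan_findings = {"CVE-2019-0708", "CVE-2018-13379"}

CVE_RE = re.compile(r"CVE-\d{4}-\d{4,7}")

mentioned = {cve for post in forum_posts for cve in CVE_RE.findall(post)}

# Vulnerabilities both present on the network and actively weaponized
# in forum chatter deserve the highest remediation priority.
at_risk = mentioned & scan_findings
print(sorted(at_risk))  # ['CVE-2019-0708']
```

A real pipeline would add deduplication, tool-to-CVE attribution and language handling, but the intersection of "what hackers are selling" with "what the scanner found" is the core of the linkage.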
The effort will focus on developing techniques to address challenges in combating international hacking operations, including the ability to collect massive amounts of data and to understand common hacker terms and concepts in other countries and languages.

Chen's co-principal investigator on the research is Weifeng Li, assistant professor of management information systems at the University of Georgia.

SOURCE: The University of Arizona

Securing the Internet of Things (IoT) in Today's Connected Society Tue, 19 Nov 2019 09:16:54 -0600 The Internet of Things (IoT) promises much: from enabling the digital organization to making domestic life richer and easier. However, with those promises come risks: the rush to adoption has highlighted serious deficiencies in both the security design of IoT devices and their implementation.

Coupled with increasing governmental concerns around the societal, commercial and critical infrastructure impacts of this technology, the emerging world of the IoT has attracted significant attention.

While the IoT is often perceived as cutting edge, similar technology has been around since the last century. What has changed is the ubiquity of high-speed, low-cost communication networks, and a reduction in the cost of compute and storage. Combined with a societal fascination with technology, this has resulted in an expanding market opportunity for IoT devices, which can be split into two categories: consumer and industrial IoT.

Consumer IoT

Consumer IoT products often focus on convenience or on adding value to services within a domestic or office environment, emphasizing the end-user experience and providing a rich data source that can be useful in understanding consumer behavior.

The consumer IoT comprises a set of connected devices, whose primary customer is the private individual or domestic market. Typically, the device has a discrete function which is enabled or supplemented by a data-gathering capability through on-board sensors and can also be used to add functionality to common domestic items, such as refrigerators. Today’s 'smart' home captures many of the characteristics of the consumer IoT, featuring an array of connected devices and providing a previously inaccessible source of data about consumer behavior that has considerable value for organizations.

Whilst the primary target market for IoT devices is individuals and domestic environments, these devices may also be found in commercial office premises – either an employee has brought in the device or it has been installed as an auxiliary function.

Industrial IoT

Industrial IoT deployments offer tangible benefits associated with digitization of processes and improvements in supply chain efficiencies through near real-time monitoring of industrial or business processes.

The industrial IoT encompasses connected sensors and actuators associated with kinetic industrial processes, including factory assembly lines, agriculture and motive transport. Whilst these sensors and actuators have always been prevalent in the context of operational technology (OT), connectivity and the data processing opportunities offered by cloud technologies mean that deeper insight and near real-time feedback can further optimize industrial processes. Consequently, the industrial IoT is seen as core to the digitization of industry.

Examples of industrial usage relevant to the IoT extend from manufacturing environments, transport, utilities and supply chain, through to agriculture.
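The near real-time monitoring described above typically amounts to streaming sensor readings through threshold or anomaly checks. A minimal sketch, with invented sensor names and limits (a production deployment would ingest readings over a protocol such as MQTT rather than from an in-memory list):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Reading:
    sensor: str
    value: float

# Hypothetical safety limits for two industrial sensors.
LIMITS = {"pump_temp_c": 80.0, "line_pressure_bar": 6.5}

def check(reading: Reading) -> Optional[str]:
    """Return an alert string if the reading exceeds its configured limit."""
    limit = LIMITS.get(reading.sensor)
    if limit is not None and reading.value > limit:
        return f"ALERT {reading.sensor}={reading.value} exceeds limit {limit}"
    return None

stream = [Reading("pump_temp_c", 74.2), Reading("pump_temp_c", 85.1),
          Reading("line_pressure_bar", 5.9)]
alerts = [a for r in stream if (a := check(r)) is not None]
print(alerts)  # one alert, for the 85.1 temperature reading
```

The value of cloud-backed industrial IoT is that the feedback loop from such checks back to the process can close in seconds rather than at the next manual inspection.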

The IoT is a Reality

The IoT has become a reality and is already embedded in industrial and consumer environments. It will further develop and become a critical component of not just modern life, but critical services. Yet, at the moment, it is inherently vulnerable, often neglects fundamental security principles and is a tempting attack target. This requires a change.

There is growing momentum behind the need for change, but much of that momentum is governmental and regulatory-focused which, as history tells us, can be problematic. The IoT can be seen as a form of shadow IT, often hidden from view and purchased through a non-IT route. Hence, responsibility for its security is often unassigned or misassigned. There is an opportunity for information security to take control of the security aspects of the IoT, but this is not without challenges, among them skills and resources. Nevertheless, there is a window of opportunity to tame this world by building security into it. As most information security professionals will know, this represents a cheaper and less disruptive option than the alternative.

In the face of rising global security threats, organizations must make systematic and wide-ranging commitments to ensure that practical plans are in place to adapt to major changes in the near future. Employees at all levels of the organization will need to be involved, from board members to managers in non-technical roles. Enterprises with the appropriate expertise, leadership, policy and strategy in place will be agile enough to respond to the inevitable security lapses. Those who do not closely monitor the growth of the IoT may find themselves on the outside looking in.

About the author: Steve Durbin is Managing Director of the Information Security Forum (ISF). His main areas of focus include strategy, information technology, cyber security and the emerging security threat landscape across both the corporate and personal environments. Previously, he was senior vice president at Gartner.

What Is Next Generation SIEM? 8 Things to Look For Thu, 14 Nov 2019 07:59:00 -0600 The SIEM market has evolved, and today most solutions call themselves "Next Generation SIEM." An effective NG-SIEM should provide better protection and, equally if not more important, a much more effective, next-gen user experience. What should you look for when evaluating a next generation SIEM?

The state of cybersecurity has evolved one threat at a time, with organizations constantly adding new technologies to combat new threats. The result? Organizations are left with complex and costly infrastructures made up of many products that are out of sync with one another, and thus simply cannot keep pace with the velocity of today’s dizzying threat landscape.

Traditional security information and event management (SIEM) solutions tried to make sense of the mess but fell short. Then came “Next Generation SIEM” or NG-SIEM. No vendor today will admit that they sell legacy SIEM, but there is no ISO style organization doling out official NG SIEM stamps of approval. So how is a security professional to know if the technology in front of him or her really brings the benefits they need, or if it’s just another legacy vendor calling itself NG-SIEM?

The basic capabilities of legacy SIEM are well known – data ingestion, analytics engines, dashboards, alerting and so on. But with only these legacy capabilities, your security team will still drown in huge amounts of logs. That's because even many NG-SIEMs on the market still let copious amounts of threats and logs pass through – straight to the doorstep of your security team.

Working Down the Pyramid

A true next generation SIEM enables the security team to work from the top down rather than the bottom up. Picture security data as a pyramid, with raw logs and alerts at the bottom: most security analysts have to sift through that bottom layer – or create manual correlation rules for new attacks that can then move logs up the pyramid. This is extremely time-consuming and frustrating. Security teams (especially small teams of one or two analysts) simply don't have the bandwidth to go through all the logs, meaning attacks slip through the cracks (and analysts burn out).

Artificial intelligence technologies available today can automatically create correlation rules for existing attacks – and even new attacks – before they occur. The significance of this for security teams is enormous: it means they can begin at the top of the pyramid by reviewing a small number of high-priority logs. For those threats the analyst deems worthy of further examination, the mid-level and raw data need to be readily available and easily searchable.
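The top-down triage described here boils down to scoring events by correlation rules and surfacing only the riskiest. A toy sketch, with invented rule weights and event fields (a real NG-SIEM would learn or auto-generate these rules rather than hard-code them):

```python
# Each rule is a (predicate, weight) pair; an event's risk score is the
# sum of weights for every rule it triggers.
RULES = [
    (lambda e: e["failed_logins"] > 10, 40),    # brute-force pattern
    (lambda e: e["new_country"], 30),           # improbable travel
    (lambda e: e["bytes_out"] > 1_000_000, 50), # possible exfiltration
]

def score(event: dict) -> int:
    return sum(weight for rule, weight in RULES if rule(event))

events = [
    {"user": "amy", "failed_logins": 2,  "new_country": False, "bytes_out": 12_000},
    {"user": "bob", "failed_logins": 14, "new_country": True,  "bytes_out": 2_500_000},
]

# The analyst starts at the top of the pyramid: highest score first.
triaged = sorted(events, key=score, reverse=True)
print(triaged[0]["user"], score(triaged[0]))  # bob 120
```

The point is the inversion of workflow: instead of reading every log, the analyst starts from a short, risk-ranked list and drills down into raw data only when needed.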

The Checklist for NG-SIEM

To make sure your NG-SIEM of choice will be effective, look for the following capabilities:

  1. Data lake – a solution that can ingest all types of data from various sources, support long-term data retention, and deliver very high search performance, while securing the data in transit and at rest.
  2. Data classification – relies on structured and unstructured data classification technologies (such as NLP) to sort all collected data into security classes, such as MITRE ATT&CK tactics and techniques – representing the data in one common language. This allows much faster investigation.
  3. Behavioral analytics – built-in network traffic analysis (NTA) and user and entity behavior analytics (UEBA) engines. By themselves these engines cannot cover the entire cyber kill chain, so they need to be part of the NG-SIEM, where their signals can be correlated with others, reducing the noise that typifies them.
  4. Auto-investigation (or SOAR) can mean many things. The bottom line is that effective auto-investigation must both perform prioritization (entity prioritization, supporting all identity types, including IP, host, user and email) and allow impact analysis – the ability to analyze the level of actual or potential impact that each risk-prioritized entity has on the organization, so that response actions can be prioritized effectively.
  5. Auto-mitigation – will not necessarily be implemented on day one; however, an NG-SIEM must be able to execute mitigation actions automatically, even if at first these are triggered only in very narrow security use cases.
  6. Automation – automation – automation – nothing can be 100% automated, but the NG-SIEM vendor needs to demonstrate at least 80% automation of legacy SIEM operations. Otherwise we are missing the whole point of what NG-SIEM is about: supporting the data-pyramid approach.
  7. Analyst support tools for data relevancy – manual investigation will always be part of the analyst's job. An NG-SIEM must provide search and hunting tools that support the analyst's advanced investigation and response actions, guiding the analyst from the top of the pyramid down through only the relevant (related) information at the bottom. This ensures advanced investigations are done quickly and efficiently.
  8. Community – solutions with an open-source component create a dynamic avenue for constant improvement of the NG-SIEM through community contributions.
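The data classification called for in item 2 can be illustrated with a crude keyword-to-technique mapping. Real NG-SIEMs use NLP models for this; the keyword table below is purely illustrative (the MITRE ATT&CK IDs are real, but the matching logic is a toy):

```python
# Toy mapping from event-text keywords to MITRE ATT&CK technique IDs,
# so events from different sources end up in "one common language".
KEYWORD_TO_TECHNIQUE = {
    "powershell":     "T1059.001",  # Command and Scripting Interpreter: PowerShell
    "mimikatz":       "T1003",      # OS Credential Dumping
    "scheduled task": "T1053",      # Scheduled Task/Job
}

def classify(event_text: str) -> set:
    """Return the set of technique IDs whose keywords appear in the event."""
    text = event_text.lower()
    return {tid for kw, tid in KEYWORD_TO_TECHNIQUE.items() if kw in text}

print(sorted(classify("Mimikatz launched via PowerShell")))  # ['T1003', 'T1059.001']
```

Once events from a firewall, an endpoint agent and a UEBA engine all carry the same technique labels, correlating them across the kill chain becomes a set operation rather than a manual rule-writing exercise.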

All of the above will create a SIEM with a user experience that allows security analysts to work top down rather than bottom up, starting with the highest-risk data.

A SIEM platform that ticks all these boxes will deliver performance that is truly "next generation," enabling the organization to respond faster to relevant threats at lower cost and with improved ROI – and making for a stable and happy security team.

About the author: Avi Chesla is the founder and CEO of empow, a cyber security startup disrupting the SIEM category with its "no rules" AI- and NLP-based i-SIEM, integrated with the Elastic Stack. Before empow he was CTO at Radware. Avi holds 25 patents in the cyber security arena.
