BlackWater Campaign Linked to MuddyWater Cyberspies Tue, 21 May 2019 14:48:00 -0500 A recently discovered campaign shows that the cyber-espionage group MuddyWater has updated its tactics, techniques and procedures (TTPs) to evade detection, Talos’ security researchers report.

MuddyWater was first detailed in 2017 and has been highly active throughout 2018. The cyber-spies have been focused mainly on governmental and telco targets in the Middle East (Iraq, Saudi Arabia, Bahrain, Jordan, Turkey and Lebanon) and nearby regions (Azerbaijan, Pakistan and Afghanistan).

The recently observed campaign, which Talos calls BlackWater, aims to install a PowerShell-based backdoor onto the victim’s machine, for remote access. Analyzed samples show that, while the actor made changes to bypass security controls, the underlying code was unchanged. 

Observed modifications include the use of an obfuscated VBA script to establish persistence as a registry key and trigger a PowerShell stager. The stager would connect to the attacker’s server to obtain a component of the open-source FruityC2 agent script to further enumerate the host machine. 

The gathered data is then sent to a different command and control (C&C) server, in the URL field, in another attempt to make host-based detection more difficult. Moreover, recent samples show that the actor aimed to replace some variable strings, likely in an attempt to avoid signature-based detection. 

MuddyWater-associated samples observed in the February - March timeframe revealed that, after achieving persistence, the actor used PowerShell commands for reconnaissance. The samples also contained the IP address of the C&C server. 

These components were found in a Trojanized attachment sent to the victim, which allowed security researchers to easily analyze the attacks by obtaining a copy of the document.

Activity observed in April, however, “would require a multi-step investigative approach,” Talos noted. A malicious document used last month and believed to be associated with MuddyWater contained a password-protected and obfuscated macro titled "BlackWater.bas". 

The macro contains a PowerShell script to persist in the "Run" registry key, and call the file “SysTextEnc.ini” every 300 seconds. The clear text version of the file, the security researchers say, appears to be a lightweight stager.

The stager would connect to a C&C server at hxxp://38[.]132[.]99[.]167/crf.txt. The clear text version of the crf.txt, Talos says, closely resembles a PowerShell agent previously used by the group. It only shows small changes, likely made to avoid detection. 

PowerShell commands derived from FruityC2 were then used to call Windows Management Instrumentation (WMI) and gather system information such as operating system name, OS architecture, operating system’s caption, domain and username, and the machine’s public IP address. 

The only command that did not call WMI would attempt to obtain the system’s MD5 hash, which was likely used to uniquely identify the machine in case multiple workstations were compromised within the same network.
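For illustration only: the categories of host data the reconnaissance step gathers (OS name, architecture, hostname and username) can be approximated cross-platform with Python’s standard library. This is an analogy to clarify what "host enumeration" means here, not the actor’s code, which uses PowerShell and WMI; the public-IP lookup is omitted since it requires an outbound request.

```python
import getpass
import platform
import socket

def enumerate_host():
    """Collect the same categories of data the recon step gathers:
    OS name/version, OS architecture, hostname and username."""
    return {
        "os_caption": platform.platform(),  # OS name and version string
        "os_arch": platform.machine(),      # e.g. x86_64 / AMD64
        "hostname": socket.gethostname(),
        "username": getpass.getuser(),
    }

info = enumerate_host()
for key, value in info.items():
    print(f"{key}: {value}")
```

On a compromised Windows host the equivalent WMI queries return the same fields (caption, architecture, domain, user), which is why this small set of attributes is enough to fingerprint a machine.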

“Despite last month's report on aspects of the MuddyWater campaign, the group is undeterred and continues to perform operations. Based on these observations, as well as MuddyWater's history of targeting Turkey-based entities, we assess with moderate confidence that this campaign is associated with the MuddyWater threat actor group,” Talos concludes. 

Related: Kaspersky Analyzes Hacking Group's Homegrown Attack Tools

Related: Highly Active MuddyWater Hackers Hit 30 Organizations in 2 Months


Copyright 2010 Respective Author at Infosec Island
Privilege Escalation Flaws Impact Wacom Update Helper Fri, 17 May 2019 09:57:37 -0500 Talos’ security researchers have discovered two security flaws in the Wacom update helper that could be exploited to elevate privileges on a vulnerable system.

The update helper tool is installed alongside the macOS application for Wacom tablets. Designed for interaction with the tablet, the application can be managed by the user.

What the security researchers have discovered is that an attacker with local access could exploit these vulnerabilities to elevate their privileges to root.

Tracked as CVE-2019-5012 and featuring a CVSS score of 7.8, the first bug was found in the startProcess command of the Wacom update helper service (driver version 6.3.32-3).

The command, Talos explains, takes a user-supplied script argument and executes it under root context. This could allow a user with local access to raise their privileges to root.

The second security flaw is tracked as CVE-2019-5013 and features a CVSS score of 7.1. It was found in the Wacom update helper service in the start/stopLaunchDProcess command.

“The command takes a user-supplied string argument and executes launchctl under root context. A user with local access can use this vulnerability to load arbitrary launchd agents,” Talos reveals.

Attackers looking to target these vulnerabilities would need local access to a vulnerable machine for successful exploitation.

According to the security researchers, the Wacom driver on macOS is affected by these vulnerabilities.

Wacom has already released version 6.3.34, which addresses these bugs.

Related: Cisco Finds Serious Flaws in Sierra Wireless AirLink Devices

Related: Hard-Coded Credentials Found in Alpine Linux Docker Images

Related: Multiple Vulnerabilities Fixed in CUJO Smart Firewall

Answering Tough Questions About Network Metadata and Zeek Wed, 08 May 2019 14:53:49 -0500 We often receive questions about our decision to anchor network visibility to network metadata, as well as how we choose and design the algorithmic models that further enrich it for data lakes and even security information and event management (SIEM) systems.

The story of Goldilocks and the Three Bears offers a pretty good analogy as she stumbles across a cabin in the woods in search of creature comforts that strike her as being just right.

As security operations teams search for the best threat data to analyze in their data lakes, network metadata often lands in the category of being just right.

Here’s what I mean: NetFlow offers incomplete data and was originally conceived to manage network performance. PCAPs are performance-intensive and expensive to store in a way that ensures fidelity in post-forensics investigations. The tradeoffs between NetFlow and PCAPs leave security practitioners in an untenable state.

NetFlow: Too little

As the former Chief of Analysis for US-CERT has noted: “Many organizations feed a steady stream of Layer 3 or Layer 4 data to their security teams. But what does this data, with its limited context, really tell us about modern attacks? Unfortunately, not much.”

That’s NetFlow.

Originally designed for network performance management and repurposed for security, NetFlow fails when used in forensics scenarios. What’s missing are attributes like port, application, and host context that are foundational to threat hunting and incident investigations.

What if you need to go deep into the connections themselves? How do you know if there are SMBv1 connection attempts, the main infection vector for WannaCry ransomware? You might know if a connection on Port 445 exists between hosts, but how do you see into the connection without protocol-level details?

You can’t. And that’s the problem with NetFlow.

PCAPs: Too much

Used in post-forensic investigations, PCAPs are handy for payload analysis and to reconstruct files to determine the scale and scope of an attack and identify malicious activity.

However, an analysis of full PCAPs in Security Intelligence explains how the simplest networks would require hundreds of terabytes, if not petabytes, of storage for PCAPs.

Because of that – not to mention the exorbitant cost – organizations that rely on PCAPs rarely store more than a week’s worth of data, which is useless when you have a large data lake. A week’s worth of data is also insufficient when you consider that security operations teams often don’t know for weeks or months that they’ve been breached.

Add to that the huge performance degradation – I mean frustratingly slow – when conducting post-forensic investigations across large data sets. Why would anyone pay to store PCAPs in return for lackluster performance?

Network metadata: Just right

The collection and storage of network metadata strikes a balance that is just right for data lakes and SIEMs.

Zeek-formatted metadata gives you the proper balance between network telemetry and price/performance. You get rich, organized and easily searchable data with traffic attributes relevant to security detections and investigation use-cases (e.g. the connection ID attribute).

Metadata also enables security operations teams to craft queries that interrogate the data and lead to deeper investigations. From there, progressively targeted queries can be constructed as more and more attack context is extracted.

And it does so without the performance and big-data limitations common with PCAPs. Network metadata reduces storage requirements by over 99%, compared to PCAPs. And you can selectively store the right PCAPs, requiring them only after metadata-based forensics have pinpointed payload data that is relevant.
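The "progressively targeted queries" described above can start very small. As an illustrative sketch, a few lines of Python can filter Zeek-formatted conn.log records for the SMB/port 445 traffic discussed earlier; the field names follow Zeek’s conn.log schema, but the sample records here are fabricated:

```python
import json

# Fabricated Zeek conn.log records (JSON format). Real logs carry many
# more fields (ts, uid, duration, byte counts, connection state, etc.).
conn_log = [
    {"id.orig_h": "10.0.0.5", "id.resp_h": "10.0.0.9", "id.resp_p": 445, "proto": "tcp", "service": "smb"},
    {"id.orig_h": "10.0.0.5", "id.resp_h": "8.8.8.8",  "id.resp_p": 53,  "proto": "udp", "service": "dns"},
    {"id.orig_h": "10.0.0.7", "id.resp_h": "10.0.0.9", "id.resp_p": 445, "proto": "tcp", "service": "smb"},
]

def smb_connections(records):
    """Return connections to port 445 where Zeek identified the SMB service."""
    return [r for r in records
            if r["id.resp_p"] == 445 and r.get("service") == "smb"]

hits = smb_connections(conn_log)
print(json.dumps(hits, indent=2))
```

From there, Zeek’s protocol-level logs (such as smb_mapping.log and ntlm.log) let a follow-up query check the negotiated SMB dialect to flag SMBv1 specifically, which is the question NetFlow alone cannot answer.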

The perils of managing your own Bro/Zeek deployment

Another question customers often ask us is whether they should manage their own Bro/Zeek deployments. The answer is best explained through the experience of one of our government customers, which chose to deploy and manage it themselves.

At the time, the rationale was reasonable: Use in-house resources for a one-time, small-scale deployment, and incrementally maintain it with the rest of the infrastructure while providing significant value to their security team.

But over time, it became increasingly untenable:

  • It was difficult to keep it tuned. Each patch or newly released version required the administrator to recompile a binary and redeploy. 
  • It became difficult to scale. While partially an architectural decision, sensors can rarely scale by default – especially those that push much of the analytics and processing to the sensor. We don’t see many deployments that can even operate at 3 Gbps per sensor. Over time, the sensors began to drop packets. The customer had to suddenly architect clusters to support the required processing.
  • It was painfully difficult to manage legions of distributed sensors across multiple geographic locations, especially when sensor configurations were heterogeneous. When administrators who were familiar with the system left, a critical part of their security infrastructure was left unmanaged.

This no-win tradeoff drives many customers to ask us how their security teams can better spend their time. Should they manually administer tools in a self-managed deployment (a.k.a. barely keeping afloat), or focus on being security experts and threat hunters?

In addition to the deployment challenges for those who opt for the self-managed approach, day-to-day operational requirements like system monitoring, system logging and even front-end authentication pose a heavy burden.

Most make the choice to find a partner that can simplify the complexity of such a deployment: Accelerate time to deployment, enable automatic updates that eliminate the need to regularly patch and maintain, and perform continuous system monitoring.

These are default capabilities that free you to focus on the original charter of your security team.

About the author: Kevin Sheu leads product marketing at Vectra. During the past 15 years, he has held executive leadership roles in product marketing and management consulting, where he has demonstrated a passion for product innovations and how they are adopted by customers. Kevin previously led growth initiatives at Okta, FireEye and Barracuda Networks.

Qakbot Trojan Updates Persistence, Evasion Mechanism Mon, 06 May 2019 12:11:02 -0500 The Qakbot banking Trojan has updated its persistence mechanism in recent attacks and also received changes that potentially allow it to evade detection, Talos’ security researchers say. 

Also known as Qbot and Quakbot, the Trojan has been around for nearly a decade, and has received a variety of changes over time to remain a persistent threat, although its functionality remained largely unaltered. 

Known for targeting businesses to steal login credentials and eventually drain their bank accounts, the malware has received updates to the scheduled task it uses to achieve persistence on infected systems, which also help it evade detection.

The Trojan typically uses a dropper to compromise a victim’s machine. During the infection process, a scheduled task is created on the victim machine to execute a JavaScript downloader that makes a request to one of several hijacked domains.

A spike in requests to these hijacked domains observed on April 2, 2019 (which follows DNS changes made to them on March 19) suggests that the threat actor has made updates to the persistence mechanism only recently, in preparation for a new campaign. 

The downloader requests the URI "/datacollectionservice[.]php3." from the hijacked domains, which are XOR encrypted at the beginning of the JavaScript. The response is also obfuscated, with the transmitted data saved as (randalpha)_1.zzz and (randalpha)_2.zzz and decrypted using a code contained in the JavaScript downloader.

At the same time, a scheduled task is created to execute a batch file. The code reassembles the Qakbot executable from the two .zzz files, using the type command, after which the two .zzz files are deleted. 

The changes in the infection chain make it more difficult for traditional anti-virus software to detect attacks, and the malware may easily be downloaded onto the target machine, given that it is now obfuscated and saved in two separate files.

“Detection that is focused on seeing the full transfer of the malicious executable would likely miss this updated version of Qakbot. Because of this update to persistence mechanisms, the transfer of the malicious Qbot binary will be obfuscated to the point that some security products could miss it,” Talos concludes. 

Related: Qakbot, Emotet Increasingly Targeting Business Users: Microsoft

Related: Qbot Infects Thousands in New Campaign

Flaws in D-Link Cloud Camera Expose Video Streams Mon, 06 May 2019 12:09:02 -0500 Vulnerabilities in the D-Link DCS-2132L cloud camera can be exploited by attackers to tap into video or audio streams, but could also potentially provide full access to the device. 

The main issue with the camera is the fact that no encryption is used when transmitting the video stream. Specifically, both the connection between the camera and the cloud and that between the cloud and the viewing application are unencrypted, thus potentially exposed to man-in-the-middle (MitM) attacks.

The viewer app and the camera communicate through a proxy server on port 2048, using a TCP tunnel based on a custom D-Link tunneling protocol, but only parts of the traffic are encrypted, ESET’s security researchers have discovered. 

In fact, sensitive details such as the requests for camera IP and MAC addresses, version information, video and audio streams, and extensive camera info are left exposed to attackers. The vulnerability resides in the request.c file, which handles HTTP requests to the camera. 

“All HTTP requests from [localhost] are elevated to the admin level, granting a potential attacker full access to the device,” ESET notes.

An attacker able to intercept the network traffic between the viewer app and the cloud or between the cloud and the camera can see the HTTP requests for the video and audio packets. This allows the attacker to reconstruct and replay the stream at any time, or obtain the current audio or video stream. 

ESET’s security researchers say they were able to obtain the streamed video content in two raw formats. 

Another major issue was found in the “mydlink services” web browser plug-in, which allows users to view video streams. The plug-in manages the creation of the TCP tunnel and the video playback, but is also responsible for forwarding requests for the video and audio data streams through a tunnel. 

The tunnel is available for the entire operating system, meaning that any application or user on the computer can access the camera’s web interface by a simple request (only during the live video streaming).

“No authorization is needed since the HTTP requests to the camera’s webserver are automatically elevated to admin level when accessing it from a localhost IP (viewer app’s localhost is tunneled to camera localhost),” the researchers explain. 

While D-Link has addressed issues with the plug-in, there are still a series of vulnerabilities in the custom D-Link tunneling protocol that provide an attacker with the possibility to replace the legitimate firmware on the device with a maliciously modified one. For that, they would need to replace the video stream GET request with a specific POST request to fetch a bogus firmware update.

The attack, ESET notes, is not trivial to perform and requires dividing the firmware file into blocks with specific headers and of a certain maximum length. However, because the authenticity of the firmware binary is not verified, an attacker could upload one containing cryptocurrency miners, backdoors, spying software, botnets or other Trojans, or they could deliberately “brick” the device.

Other issues the researchers discovered include the fact that D-Link DCS-2132L can set port forwarding to itself on a home router, via the Universal Plug and Play (UPnP) protocol. Thus, it exposes its HTTP interface on port 80 to the Internet without the user even knowing about it. The issue can be mitigated by disabling UPnP. 

“Why the camera uses such a hazardous setting is unclear. Currently close to 1,600 D-Link DCS-2132L cameras with exposed port 80 can be found via Shodan, most of them in the United States, Russia and Australia,” the researchers say. 

ESET says it reported the issues to D-Link in August 2018, including vulnerable unencrypted cloud communication, insufficient cloud message authentication and unencrypted LAN communication, but that only some of the flaws have been mitigated, such as the “mydlink services” plug-in, which is now properly secured. The most recent firmware available for the device is dated November 2016. 

“D-Link DCS-2132L camera is still available on the market. Current owners of the device are advised to check that port 80 isn’t exposed to the public internet and reconsider the use of remote access if the camera is monitoring highly sensitive areas of their household or company,” ESET concludes. 
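ESET’s advice to check whether port 80 is exposed comes down to a TCP connect test from outside the local network. A minimal sketch (run it only against your own public IP; the address below is a documentation placeholder):

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Replace with your network's public IP before running;
    # 203.0.113.1 is a reserved documentation address.
    print(port_open("203.0.113.1", 80, timeout=1.0))
```

If the check returns True unexpectedly, disabling UPnP on the router (as the article suggests) removes the camera’s self-created port forwarding.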

Related: Critical Vulnerabilities Allow Takeover of D-Link Routers

Related: D-Link Patches Code Execution, XSS Flaws in Management Tool

SOAR: Doing More with Less Fri, 26 Apr 2019 04:29:01 -0500 The security orchestration, automation and response model has many benefits, including some that are unintended

Security teams in every industry and vertical are facing a common set of challenges. Namely, defending against an endless stream of cyberattacks, having too many security tools to manage, dealing with overwhelming workloads, and having a shortage of skilled security analysts. Most enterprises try to solve these challenges the old-fashioned way — by adding more tools and hoping they deliver on their promises.

Progressive enterprises are adopting a new approach, called Security Orchestration, Automation and Response (SOAR), that focuses on making existing technologies work together to align and automate processes. SOAR also frees security teams to focus on mitigating active threats instead of wasting time investigating false positives and performing routine tasks manually.

What is SOAR?

SOAR enables security operations centers (SOCs), computer security incident response teams (CSIRTs) and managed security service providers (MSSPs) to work faster and more efficiently.

Security Orchestration connects disparate security systems as well as complex workflows into a single entity, for enhanced visibility and to automate response actions. Orchestration can be accomplished between security tools via integration using APIs to coordinate data alert streams into workflows.

Automation, meanwhile, executes multiple processes or workflows without the need for human intervention. It can drastically reduce the time it takes to execute operational workflows, and enables the creation of repeatable processes and tasks.

Instead of performing repetitive, low level manual actions, security analysts can concentrate on investigating verified threats that require human analysis.

Some SOAR approaches even use machine learning to recommend actions based on the responses used in previous incidents.
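To make the orchestration/automation distinction concrete, here is a deliberately simplified playbook sketch. Every function, field and threshold is hypothetical, standing in for API integrations with real tools (SIEM, threat intelligence, ticketing):

```python
def enrich(alert: dict) -> dict:
    """Orchestration: pull context from other tools via their APIs.
    Here a prefix match stands in for a threat-intel lookup."""
    alert["known_bad"] = alert["src_ip"].startswith("203.0.113.")
    return alert

def triage(alert: dict) -> str:
    """Automation: apply a repeatable decision without human intervention."""
    if not alert["known_bad"] and alert["severity"] < 3:
        return "auto-close"   # likely false positive; no analyst time spent
    return "escalate"         # verified or serious; route to a human

alerts = [
    {"id": 1, "src_ip": "198.51.100.7", "severity": 1},
    {"id": 2, "src_ip": "203.0.113.9",  "severity": 2},
]
for a in alerts:
    print(a["id"], triage(enrich(a)))
```

The value is not in any single rule but in the repeatability: the same enrich-then-decide sequence runs identically for every alert, and only the escalated remainder reaches an analyst.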

Three elements make up a successful SOAR implementation:

  • Collaboration - essential for creating efficient communication flows and knowledge transfer across security teams.
  • Incident Management - ideally, a single platform will process all inputs from security tools, providing decision-makers with full visibility into the incident management process.
  • Dashboards and Reporting - provide a comprehensive view of an enterprise’s security infrastructure as well as detailed information for any incident, event, or case.

Implementing SOAR

One of the primary benefits of SOAR is its flexibility. It can be used to unify operations across an enterprise’s entire security ecosystem, or as a vertical solution integrated within an existing product.

For example, one of the most popular product categories for this kind of vertical implementation is Security Information and Event Management (SIEM). Primarily because SOAR within a SIEM can have broad applicability across a wide range of processes. In contrast, when SOAR is implemented within other product areas, such as Threat Intelligence, it tends to have a more limited scope.

Initially, SOAR was designed for use by SOCs. However, as the approach matured and proved its benefits, other groups have adopted it, including managed security services providers (MSSPs) and computer security incident response teams (CSIRTs). More recently, financial fraud and physical security teams have also turned to SOAR.

Top Five SOAR Benefits

Arguably, the most powerful benefit of SOAR is its ability to integrate with just about any security process or tool already in use — and to enhance the performance and usefulness of each. Tight integration improves the efficiency of security teams to detect and remediate threats and attacks. It provides a single ‘pane of glass’ into asset databases, helpdesk systems, configuration management systems, and other IT management tools.

SOAR arms security teams with the ability and intelligence to react faster and more decisively to a threat or attack by unifying information from multiple tools and creating a single version of the truth.

Security teams waste an inordinate amount of time and energy dealing with false positives, since there are so many of them generated each day. SOAR automates the triage and assessment of low-level alerts, freeing staff to focus their attention where it is really needed.

Security staff spend way too much time on menial tasks such as updating firewall rules, adding new users to the network, and removing those who have left the company. SOAR virtually eliminates such time-consuming, repetitive functions.

Although cutting costs is rarely a driving factor for adopting SOAR, it often delivers this additional benefit by improving efficiencies and staff productivity.

Unifying existing security tools and making them work together, rather than in silos, delivers greater visibility into threats. Implementing a SOAR model can provide the glue that makes this security intelligence actionable, using repeatable processes for faster incident response without requiring additional resources.

About the Author: Michele Zambelli has more than 15 years of experience in security auditing, forensics investigations and incident response. He is CTO at DFLabs, where he is responsible for the long-term technology vision of its security orchestration, automation and response platform, managing R&D and coordinating worldwide teams.

Gaining Control of Security and Privacy to Protect IoT Data Wed, 24 Apr 2019 05:50:29 -0500 Internet traffic growth is unrelenting and will continue to expand exponentially, in large part due to the Internet of Things (IoT). The amount of data being generated is staggering, with 5 quintillion bytes of data produced and transmitted over the Internet, daily.

Virtually every industry is going to be impacted by IoT. The vast amounts of data that devices, apps and sensors create, collect and consume are a real challenge for individuals and companies, throughout the world. This explosive growth of IoT-driven traffic is expanding the attack surface on our networks, putting business and user data at great risk.

Within our increasingly connected world, IoT gathers insightful data from almost every facet of our lives. These devices offer valuable intel about our social habits, public and personal transportation, healthcare facilities, manufacturing assembly processes, industrial plant operations, and even our personal health, sleep and exercise regimens.  

Can you imagine the consequences IoT device manufacturers and healthcare providers would face if sensitive patient health data was mishandled and exploited by hackers? Or if a design flaw in a modern car’s network access control system couldn’t be remotely patched, and hackers took over the vehicle's gas pedal, brakes and steering? If we don’t get a handle on the security issues for smart products, tomorrow’s news headlines will eliminate your need to imagine.

I remember a children’s song called “Dem Bones.” It went something like, “The toe bone's connected to the foot bone, the foot bone's connected to the ankle bone, the ankle bone's connected to the leg bone, now shake dem skeleton bones!”

Here’s a different take on that song. “The watch app is connected to the voice app, the voice app is connected to the car app, the car app is connected to the geofencing app - and that’s how the data gets around!”

While data access is great for helping us gain useful insights in all manner of life and business, it also poses a great threat, when in the hands of those who use it for ill-gotten gain.

IoT Data Should Be Private and Controlled

Data is being created, collected and consumed by IoT, everywhere. Yet, most consumers and companies don’t know if, or how, their data is being used. Many companies are monetizing our personal data, without our knowledge, and reaping billions of dollars. Yet, we continue to just give it away. Other companies are sharing this data within their ecosystem, to “enhance” the value of their products or services. Depending on the product or service, this information sharing can be of potential benefit for consumers, or a possible detriment.

So, what are we as individuals, and as a society to do? How do we discover who has access to our data, and how it is being used? Are we okay with this? After all, what we don’t know can’t hurt us, right?  Perhaps, we can start by becoming more aware and asking some of these questions:

What can we do to protect our data, and keep it confidential? How can we be assured that companies are acting responsibly with our data? Who is responsible for data protection? If we had a choice, how and what kind of information would we want shared with us? How can we gain greater transparency over how our data is used? Are we comfortable living with smart home devices that may listen in on private conversations we have at home?

The other day, my wife and I had a conversation at home about buying new shoes for our daughters. While having this conversation, we were surrounded by smart devices - Alexa, Ring, Nest and a multitude of smart phones and tablets. The next morning, I woke up and the first image in my Instagram feed was for toddler girl shoes. Is this a coincidence or targeted marketing? I haven’t figured out which device it was that captured our conversation, but I’m certain one of the smart devices is monetizing the data it collects from private conversations going on in our home.

That story provides a real-life example of how companies may be monetizing data they collect from IoT devices. As devices proliferate into society, it’s important for consumers to be aware that data is being captured, and to know how and when it is being captured. Manufacturers need to be more transparent about these practices so consumers have the right to opt in or out of data collection on such a private and intrusive level.

You Can’t Have Trust Without Transparency

Many of the privacy questions mentioned above are not going to be solved with technology alone. We must gain greater insight into, and control over, the way our data is used. Companies must self-regulate, and if they don’t, regulatory and legislative action should follow.

Many IoT manufacturers have direct control over their ecosystem, while others have more open systems and hub platforms that are more difficult to control, and specifically, to control how data is collected, stored, and ultimately used. Most companies fall short in communicating their data-handling policies to consumers.

We want these amazing devices in our homes, cars, offices, and bodies, but we don’t necessarily want the companies, or worse, hackers, misusing our information. It’s a Catch-22. There are no easy answers or solutions; however, as a society, we must feel the urgency to address this growing problem. Consumers need to be aware, while manufacturers need to be responsible.

I think transparency is key to solving this problem. Companies must adopt a more transparent use of customer data, which will, in turn, build customer trust. Transparency will help us know what data is being tracked, how it is being tracked, and how it is being monetized or shared. In the near future, we will have systems that provide data visibility to consumers. Perhaps a privacy portal with authentication mechanisms, where consumers can have autonomy, and even the ability to monetize their own data by revenue sharing with companies.

Not only will this give consumers control over their data, it will also help companies build greater loyalty and brand equity, when they show consistent data stewardship.

Protecting IoT Data in Transit

In addition to a higher level of transparency, manufacturers need to protect the sensitive data collected. Data encryption is a best practice for confidentiality, and should be used by all IoT manufacturers when transmitting data.  Making sure all connections to an IoT device are properly authenticated, and that access controls are in place, helps keep bad actors out of the device’s ecosystem. If IoT is going to continue to grow in the future, we must have confidence in the security and privacy of our data. I believe all IoT devices that collect personal data, or sensitive business information, should always use encryption.
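In Python, the transport-encryption baseline described here amounts to a properly configured TLS context. A minimal client-side sketch (the telemetry endpoint in the usage comment is an assumption):

```python
import ssl

# A TLS context with certificate verification and a modern protocol floor:
# the baseline a device should use before transmitting any telemetry.
context = ssl.create_default_context()           # verifies server certs by default
context.minimum_version = ssl.TLSVersion.TLSv1_2
context.check_hostname = True                    # True by default; shown explicitly

# Usage (endpoint is a placeholder):
# with socket.create_connection(("telemetry.example.com", 443)) as sock:
#     with context.wrap_socket(sock, server_hostname="telemetry.example.com") as tls:
#         tls.sendall(payload)
print(context.verify_mode == ssl.CERT_REQUIRED)
```

The key point is that `create_default_context()` enables certificate verification; disabling `check_hostname` or setting `verify_mode` to `CERT_NONE` is exactly the unauthenticated-connection mistake the article warns against.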

Controlling access to encryption keys is accomplished through authentication. User authentication uses username and password combinations, biometrics, tokens and other techniques. Server authentication uses certificates to identify trusted third-parties. Authentication allows a company to determine if a user or entity is who they say they are. It then verifies if, and how, they can access a system, including the ability to decipher encrypted data. Without question, multi-factor authentication is always the most secure form of protection for users.

While encryption and authentication protect data, they can’t prevent unauthorized access to a network. As a result, in addition to protecting access through authentication, authorization is used to control who sees sensitive data, and what they can do with it.

Authorization allows IT to restrict activity within their resources, applications and data, by giving specific access rights to individuals and groups. Privileges are defined, and the level of access is granted to individuals or groups.
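As a sketch of how privileges and access levels might be modeled, here is a minimal role-based authorization check in Python. The roles and actions are hypothetical; a real system would back this with a policy store.

```python
# Hypothetical role-to-privilege mapping; the names are illustrative only.
ROLE_PRIVILEGES = {
    "viewer":  {"read"},
    "analyst": {"read", "export"},
    "admin":   {"read", "export", "write", "delete"},
}

def is_authorized(role, action):
    """Default deny: allow only actions explicitly granted to the role."""
    return action in ROLE_PRIVILEGES.get(role, set())
```

An unknown role gets an empty privilege set, so the check fails closed rather than open.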

Updating software on IoT devices isn’t always possible, and many devices lack a secure method of ensuring the authenticity or integrity of software updates. This is a dangerous gap, as it enables hackers to introduce malware into devices. Code signing is an effective solution that requires proof of the origin and integrity of executable software: a private signing key is used to create a digital signature over a hash of the file. Code signing protects IoT device manufacturers, the businesses that deploy the devices, and the consumers of the devices from the dangers posed by unauthorized software.
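The sign-then-verify flow can be sketched as follows. Real code signing uses an asymmetric key pair (the vendor signs with a private key and devices verify with the public key, e.g. RSA or Ed25519); to keep this Python sketch dependency-free, an HMAC stands in for the asymmetric signature, but the hash-sign-verify sequence is the same.

```python
import hashlib
import hmac

def sign_firmware(firmware: bytes, key: bytes) -> bytes:
    # Hash the image, then sign the hash. A real signer would use an
    # asymmetric private key; HMAC is a stdlib-only stand-in here.
    digest = hashlib.sha256(firmware).digest()
    return hmac.new(key, digest, hashlib.sha256).digest()

def verify_firmware(firmware: bytes, signature: bytes, key: bytes) -> bool:
    # A device should refuse to install any image that fails this check.
    expected = sign_firmware(firmware, key)
    return hmac.compare_digest(expected, signature)

key = b"vendor-signing-key"      # illustrative only
image = b"firmware image bytes"  # illustrative only
sig = sign_firmware(image, key)
```

A single flipped byte in the image or the signature makes verification fail, which is exactly the property an update mechanism needs.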

Consistency, security and trust have always been requirements for ensuring lasting customer relationships, and the digital age is no different. It’s a matter of who is in control of our data. Today, IoT device manufacturers and businesses are in control. In the future, we must be in control of our own information.

About the author: Mike Nelson is the VP of IoT Security at DigiCert, a global leader in digital security. He oversees the company’s strategic market development for the various critical infrastructure industries securing highly sensitive networks and Internet of Things (IoT) devices, including healthcare, transportation, industrial operations, and smart grid and smart city implementations.

Copyright 2010 Respective Author at Infosec Island
Growing Reliance on Digital Connectivity Amplifies Existing Risks, Creates New Ones
Wed, 24 Apr 2019 05:03:50 -0500

Information security threats are intensifying every day. Organizations risk becoming disoriented and losing their way in a maze of uncertainty, as they grapple with complex technology, data proliferation, increased regulation, and a debilitating skills shortage.

By 2021 the world will be heavily digitized. Technology will enable innovative digital business models and society will be critically dependent on technology to function. This new hyperconnected digital era will create an impression of stability, security and reliability. However, it will prove to be an illusion that is shattered by new vulnerabilities, relentless attacks and disruptive cyber threats.

The race to develop the next generation of super-intelligent machines will be in full swing and technology will be intertwined with everyday life. Coupled with heightened global mistrust and rising geopolitical tensions, this will lead to a cyber threat that is relentless, targeted and disruptive. The operating environment for business will become increasingly volatile.

At the Information Security Forum, we recently highlighted the top three threats to information security emerging over the next two years, as determined by our research. Let’s take a quick look at these threats and what they mean for your organization:


Vast webs of intelligent devices, combined with increased speeds, automation and digitization will create possibilities for businesses and consumers that were previously out of reach. The Internet of Things (IoT) will continue to develop at an astonishing rate, with sensors and cameras embedded into a range of devices across critical infrastructure. The resulting nexus of complex digital connectivity will prove to be a weakness as modern life becomes entirely dependent on connected technologies, amplifying existing dangers and creating new ones.

5G Technologies Broaden Attack Surfaces: The emergence of the fifth generation of mobile networks and technologies (5G) will provide a game-changing platform for businesses and consumers alike. Colossal speeds, minimal latency and a range of newly available radio frequencies will connect previously unconnected devices, accelerate processes and change entire operating models – but with these changes comes a broader attack surface, as millions of telecommunication masts are built with varying levels of security. As organizations become heavily reliant on 5G to operate, new attack vectors will exploit weaknesses in this emerging technology.

Manipulated Machine Learning Sows Confusion: Machine learning, and neural networks in particular, will underpin processes such as image recognition, pricing analysis and logistics planning. As businesses become reliant upon machine learning and humans are taken out of the knowledge loop, machine learning will become a prime target for attackers. Confusion, obfuscation, and deception will be used by attackers to manipulate these systems, either for financial gain or to cause as much damage and disruption as possible.

Parasitic Malware Feasts on Critical Infrastructure: Parasitic malware is a particular strain of malware designed to steal processing power, traditionally from computers and mobile devices. However, attackers will turn their attention to the vast interconnectivity and power consumption of Industrial Control Systems (ICS), IoT devices and other critical infrastructure, which offer an enticing environment for this malware to thrive. All organizations will be threatened as this form of malware sucks the life out of systems, degrading performance and potentially shutting down critical services.


By 2021 a digital cold war will unfold, causing significant damage to business. The race to develop strategically important, next generation technologies will provoke a period of intense nation state-backed espionage – intellectual property (IP) will be targeted as the battle for economic and military dominance rages on. Cloud services will become a prime target for sabotage by those seeking to cause disruption to society and business. Drones will become both the weapon and target of choice as attackers turn their attention skywards.

State-Backed Espionage Targets Next Gen Tech: A new wave of nation state-backed espionage will hit businesses as the battle for technological and economic supremacy intensifies. The target: the next generation of technology. History teaches us that, at times of great technological change, targeted industrial espionage follows. Organizations developing technologies such as Artificial Intelligence (AI), 5G, robotics and quantum computing will find their IP systematically targeted by nation state-backed actors.

Sabotaged Cloud Services Freeze Operations: Popular cloud providers will have further consolidated their market share – organizations will be heavily, if not totally, dependent on cloud providers to operate. Attackers will aim to sabotage cloud services, causing disruption to Critical National Infrastructure (CNI), crippling supply chains and compromising vast quantities of data. Organizations and supply chains that are reliant on cloud services will become collateral damage when cloud services go down for extended periods of time.

Drones Become Both Predator and Prey: Drones will become predators controlled by malicious actors to carry out more targeted attacks on business. Developments in drone technologies, combined with the relaxation of aviation regulations, will amplify attackers’ capabilities as the threat landscape takes to the skies. Conversely, drones used for commercial benefit will be preyed upon, hijacked and spoofed, with organizations facing disruption and loss of sensitive data.


Competing in the digital marketplace will become increasingly difficult, as businesses develop new strategies which challenge existing regulatory frameworks and social norms, enabling threats to grow in speed and precision. Vulnerabilities in software and applications will be frequently disclosed online with ever-decreasing time to fix them. Organizations will struggle when one or more of the big tech giants are broken up, plunging those reliant on their products and services into disarray. Organizations will rush to undertake overly ambitious digital transformations in a bid to stay relevant, leaving them less resilient and more vulnerable than ever.

Digital Vigilantes Weaponize Vulnerability Disclosure: Ethical vulnerability disclosure will descend into digital vigilantism. Attackers will weaponize vulnerability disclosure to undercut organizations, destroy corporate reputations or even manipulate stock prices. Organizations will find their resources drained as digital vigilantes reduce the timelines to fix vulnerabilities and apply patches, seriously threatening operations, reputations and endangering customers.

Big Tech Break Up Fractures Business Models: Calls for the breakup of big technology giants will reach their peak by 2021. By then, at least one of them will be broken up, significantly disrupting the availability of the products and services they provide to dependent organizations. From email to search engines, advertising, logistics and delivery, the entire operating environment will change. Malicious actors will also prey upon vulnerable, transitioning organizations.

Rushed Digital Transformations Destroy Trust: The demand for organizations to remain relevant in a technology-centric world will drive them to undertake rushed digital transformations. Organizations will deploy technologies such as blockchain, Artificial Intelligence (AI) and robotics, expecting them to seamlessly integrate with ageing systems. Organizations will be faced with significant disruption to services, as well as compromised data when digital transformations go wrong.

Preparation Must Begin Now

Information security professionals are facing increasingly complex threats—some new, others familiar but evolving. Their primary challenge remains unchanged: to help their organizations navigate mazes of uncertainty where, at any moment, they could turn a corner and encounter information security threats that inflict severe business impact.

In the face of mounting global threats, organizations must make methodical and extensive commitments to ensure that practical plans are in place to adapt to major changes in the near future. Employees at all levels of the organization will need to be involved, from board members to managers in non-technical roles.

The three themes listed above could impact businesses operating in cyberspace at breakneck speed, particularly as the use of the Internet and connected devices spreads. Many organizations will struggle to cope as the pace of change intensifies. These threats should stay on the radar of every organization, both small and large, even if they seem distant. The future arrives suddenly, especially when you aren’t prepared.

Copyright 2010 Respective Author at Infosec Island
How Microsegmentation Helps to Keep Your Network Security Watertight
Wed, 24 Apr 2019 04:57:44 -0500

A submarine operates in hazardous conditions: in the ocean depths, even a small breach of its hull could spell disaster for the vessel and its crew. That’s why submarine designers don’t just rely on the strength of the outer skin for protection. The interior is segmented into multiple watertight compartments, with each capable of being closed off in the event of an emergency so that the rest of the boat can continue to function.

The same logic has been applied to enterprise networks for several years now.  Segmentation has been a recommended strategy for shrinking enterprise attack surfaces, with a lack of it being cited as a contributing factor in some of the biggest-ever data breaches. A lack of segmentation also contributed to the $40M disruption experienced by manufacturer Norsk Hydro in March this year, when multiple IT and operational systems were hit by ransomware that moved laterally across its networks.

But while segmentation is recognized as an effective method for enhancing security, it can also add significant complexity and cost – especially in traditional on-premise networks and data centers. In these, creating internal zones usually means installing extra firewalls, cabling and so on to police the traffic flows between zones. This is complex to manage when done manually.

However, the move to virtualized data centers using software-defined networking (SDN) changes this. SDN’s flexibility enables more advanced, granular zoning, allowing networks to be divided into hundreds of microsegments, delivering a level of security that would be prohibitively expensive and complicated to implement in a traditional data center. As such, research by analyst ESG has shown that nearly 70% of enterprises are already using some form of micro-segmentation to limit hackers’ ability to move laterally on networks, and make it easier to protect applications and data.

Even though SDN makes segmentation far easier to achieve, implementing an effective micro-segmentation strategy presents security teams with two key challenges. First, where should the borders be placed between the microsegments in the network or data center for optimum protection against malware and hackers? Second, how should the teams devise and manage the security policies for each of the network segments, to ensure that legitimate business application traffic flows are not inadvertently blocked and broken by the micro-segmentation scheme?

A process of discovery

To start devising a micro-segmentation scheme for an existing network or datacenter, you need to discover and identify all the application flows within it. This can be done using a discovery engine which identifies and groups together those flows which have a logical connection to each other and are likely to support the same business application.

The information from the discovery engine can be augmented with additional data, such as labels for device names or application names that are relevant to the flows. When compiled, this creates a complete map identifying the flows, servers and security devices that your critical business applications rely on.

Using this map, you can start to draw up your segmentation scheme by deciding which servers and systems should go into each segment. A good way to do this is to identify and group together servers that support the same business intent or applications. These will typically share similar data flows, and so can be placed in the same segment.
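One simple way to realize this grouping, assuming the discovery engine emits (source, destination) flow pairs, is to treat flows as edges and take the connected components of the resulting graph. A Python sketch with hypothetical server names:

```python
from collections import defaultdict

def group_into_segments(flows):
    """Group servers that share application flows into candidate segments
    (connected components of the flow graph)."""
    adj = defaultdict(set)
    for src, dst in flows:
        adj[src].add(dst)
        adj[dst].add(src)
    seen, segments = set(), []
    for server in list(adj):
        if server in seen:
            continue
        stack, component = [server], set()
        while stack:
            node = stack.pop()
            if node in component:
                continue
            component.add(node)
            stack.extend(adj[node] - component)
        seen |= component
        segments.append(component)
    return segments

# Hypothetical discovery output: (source, destination) pairs.
flows = [("web1", "app1"), ("app1", "db1"), ("web2", "app2"), ("app2", "db2")]
segments = group_into_segments(flows)
```

Here the two application stacks never exchange traffic, so they fall out as two candidate segments; in practice you would refine this first cut with the business-intent labels gathered during discovery.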

Once the scheme is outlined, you can then choose the best places on the network to place the security controls to enforce the borders between segments. To do this, you need to establish exactly what will happen to your business application flows when those filters are introduced.

Remember that when you place a physical or virtual filtering device to create a segment border, some application traffic flows will need to cross that border. These flows will need explicit policy rules to allow them, otherwise the flows will be blocked and the applications that rely on them will fail.

Crossing the borders

To find out if you need to add or change specific policy rules, examine the application flows that you identified in your initial discovery process – and make a careful note of any flows which already pass through an existing security control. If a given application flow does not currently pass through any security control, and you plan to create a new network segment, you need to know whether the unfiltered flow will be blocked when that segment border is established. If it will, you need to add an explicit new policy rule that allows the application flow to cross it.
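The rule-generation step can be sketched as follows: given a segment assignment for each server, any flow whose endpoints land in different segments needs an explicit allow rule at the border. The server names, segment labels and ports below are hypothetical.

```python
def rules_for_borders(flows, segment_of):
    """Emit an explicit allow rule for every flow that crosses a segment
    border; intra-segment flows need no border rule."""
    rules = []
    for src, dst, port in flows:
        if segment_of[src] != segment_of[dst]:
            rules.append({"allow": (src, dst), "port": port})
    return rules

# Hypothetical flows and segment assignments.
segment_of = {"web1": "dmz", "app1": "app", "db1": "data"}
flows = [("web1", "app1", 8443), ("app1", "db1", 5432), ("web1", "web1", 443)]
rules = rules_for_borders(flows, segment_of)
```

Only the two cross-segment flows produce rules; the flow that stays inside the "dmz" segment never touches a border filter.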

Micro-segmentation management

Having devised and implemented your micro-segmentation scheme, you will need to manage and maintain it, and ensure it works in harmony with the security across your entire enterprise network. The most effective way to achieve this is with a network security automation solution that can holistically manage all the security controls in your SDN environment alongside your existing traditional on-premise firewalls.

Automation ensures that the security policies which underpin your segmentation strategy are consistently applied and managed across your entire network estate, together with centralized monitoring and audit reporting. Any changes that you want to make to the segmentation scheme can be assessed and risk-checked beforehand to ensure that applications will continue to work, and no connectivity is affected. Then, if the changes do not introduce any risk, they can be made automatically, with zero-touch, and automatically recorded for audit purposes. This streamlines the management process, and avoids the need for cumbersome, error-prone manual processes every time you need to make a network change.
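The assess-before-change step can be illustrated with a toy check: before removing a rule, list the known application flows that would be left without any matching allow rule. The rule format here is hypothetical and far simpler than a real policy model.

```python
def flows_broken_by_removal(rule_to_remove, rules, flows):
    """Return the flows that would lose their only matching allow rule
    if the proposed rule were removed."""
    remaining = [r for r in rules if r != rule_to_remove]
    def allowed(flow):
        return any(r["src"] == flow[0] and r["dst"] == flow[1] for r in remaining)
    return [f for f in flows if not allowed(f)]

# Hypothetical policy and discovered flows.
rules = [{"src": "app1", "dst": "db1"}, {"src": "web1", "dst": "app1"}]
flows = [("app1", "db1"), ("web1", "app1")]
broken = flows_broken_by_removal({"src": "app1", "dst": "db1"}, rules, flows)
```

A non-empty result means the change carries risk and should be rejected or revised before anything is pushed to the enforcement points.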

To conclude, building and implementing a micro-segmentation strategy requires careful planning and orchestration to ensure it is effective. Automation is critical to success, as it eliminates time-consuming, complex and risky manual security processes. But when done right, micro-segmentation helps to ensure that your networks offer watertight security, and stops a small breach from turning into a disaster that could sink your business.

About the author: Professor Avishai Wool is the CTO and Co-Founder of AlgoSec.

Copyright 2010 Respective Author at Infosec Island
Through the Executive Lens: Prioritizing Application Security Vulnerabilities
Thu, 28 Mar 2019 05:29:54 -0500

It’s an old axiom in the security business that your security is only as good as your weakest link. Today, as the number of security threats and attack vectors continues to grow, so too does the number of tools security teams have at their disposal to find and block them. Also growing is the pile of data that security teams must sift through to identify where their systems might be vulnerable. Given all the data, how do you prioritize your efforts?

First, a couple of statistics. According to Tim Clark, SAP contributor to Forbes, 84 percent of all cyber-attacks are happening on the application layer. The 2018 Verizon Data Breach Investigations Report (DBIR) states that web application attacks were responsible for 38 percent of data breaches. And an IBM white paper states that “the costs of discovering defects after release are significant: up to 30 times more than if you catch them in the design and architecture phase.” Conclusion: Start by focusing on your application security initiatives.

Within the AppSec space, the variety of vulnerability analysis tools fall into two broad groups: tools that analyze your source code and tools that do dynamic analysis. Each tests for a different type of vulnerability, so a portfolio approach to using them will give you the most comprehensive results—and the most data to sift. You can narrow your focus and prioritize issues in a number of ways.

IDE tools

Use source code scanning tools that integrate with the tools your developers use every day, like their integrated development environment (IDE). Some static analysis tools have IDE plug-ins that let your developers do vulnerability analysis directly in the IDE.

This approach to “shifting security left” in the software development life cycle (SDLC) has several benefits. First, it distributes the load of looking at vulnerabilities across the entire development organization and makes the team more aware of writing secure code as part of their daily job. Second, it reduces the total number of security issues that make it into the code scanned at CI/CD build time.

Whichever tool you pick, be sure that the developer scans use the same engines as the central scans. Otherwise, correlating results across the two scan types won’t work well. And if that plug-in supports multiple analysis types, so much the better.

False-positive rate

Choose vulnerability scanning tools with low false-positive rates. Not only do false positives increase the volume of data to sift through, but too many false positives in a developer’s queue breeds malaise and disinterest in fixing them.

Developer training and measurement

Add security training to your developers’ personal development goals, and measure security issues as part of their MBOs. Learning about common vulnerability types, such as cross-site scripting, will make the team more efficient. Adding metrics around software security as part of a team’s MBOs will ensure that developers treat security on par with quality and feature delivery. Nothing changes behavior more than a combination of incentives and measurement by one’s boss.

Risk correlation

This one is harder than you might think. Several tools let you aggregate the results from different tools into one view showing the risk profile of a given app based on those results. The challenge is in correlating data that comes from different tools, each with its own categorizing methodology. Ideally, you’d have a tool that normalizes the results across tools and lets you filter issues based on things like security category and industry standards, such as the OWASP Top 10 or CWE categories.

A few tools offer other features, such as showing open/closed issues over time so you can see progress, and the ability to filter results from one tool by the results of another. For example, if your static analysis tool says you’ve got 1,000 issues, but your open source scanning tool reports that 800 of those are in open source components, your developers can focus on fixing the 200 that you know are uniquely in your source code.
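That filtering step is essentially a set difference over issue locations. A Python sketch with made-up file paths and issue IDs:

```python
def unique_first_party_issues(static_issues, open_source_files):
    """Drop issues whose file the open source scanner already attributes
    to a third-party component, leaving only first-party findings."""
    return [i for i in static_issues if i["file"] not in open_source_files]

# Hypothetical scanner outputs.
static_issues = [
    {"id": 1, "file": "src/auth.c"},
    {"id": 2, "file": "vendor/zlib/inflate.c"},
    {"id": 3, "file": "src/session.c"},
]
open_source_files = {"vendor/zlib/inflate.c"}
ours = unique_first_party_issues(static_issues, open_source_files)
```

The issues in third-party components still matter, but they are typically handled by upgrading the component rather than by your developers editing the code, which is why splitting the two piles sharpens prioritization.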

Summing it up

The work of the security team is never done, but by focusing on specific AppSec initiatives and applying some well-tested strategies and tools, you can do a lot to prioritize the most important issues to focus on.

About the author: Neal Goldman is Senior Product Manager at Synopsys, with over 25 years of product management, marketing, and business development experience at a variety of technology vendors.


Copyright 2010 Respective Author at Infosec Island
Next Generation Firewalls are Old News in the Cloud
Wed, 27 Mar 2019 11:47:15 -0500

I have been in the security field for many years, long enough to have seen the firewall replaced with the “Next Generation Firewall.” What was special about this change was that it signaled a big milestone: we went from a model that focused on IP addresses to one that targeted applications, users and content. This major shift provided a lot more visibility and context on what was being protected.

As you move to the cloud, the “Next Generation Firewall” is no longer “Next Generation” but looks like an antique “Grandfather’s Generation” firewall, which will inevitably suffer the same fate as, say, the dinosaurs. The Next Generation Firewall’s application visibility comes from deep packet inspection, which identifies and inspects applications. The challenge is that in the cloud most traffic is encrypted, which means the network has no ability to inspect it. Even if, by some miracle, you are able to perform “man in the middle” decryption of the data, the scale and elasticity of the cloud would make current Next Generation Firewalls useless.

Next Generation Firewall Can’t Keep Up in the Cloud

Applications in an IaaS environment are custom-written, so there are no known signatures to identify the app. Even if you can identify an application, its security profile can differ based on how it is used: two deployments of the same database application can have completely different security profiles and communication patterns, even though from a launch perspective they are the same application. A Next Generation Firewall cannot distinguish between launch and communication patterns to understand an application’s behavior or the policy it requires.
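A toy illustration of behavior-based attribution: match an app instance to a profile by the peers it communicates with, rather than by a static launch signature. The profiles and peer names below are invented for the example.

```python
def classify_by_behavior(observed_peers, profiles):
    """Match an app instance to a behavioral profile by its peer set,
    not by a static launch signature."""
    best, best_overlap = None, -1
    for name, expected_peers in profiles.items():
        overlap = len(observed_peers & expected_peers)
        if overlap > best_overlap:
            best, best_overlap = name, overlap
    return best

# Two deployments of the same database binary, distinguishable only by behavior.
profiles = {
    "internal-analytics-db": {"etl-worker", "report-server"},
    "customer-facing-db": {"api-gateway", "web-frontend"},
}
label = classify_by_behavior({"api-gateway", "web-frontend"}, profiles)
```

A signature-based inspection engine would see the same binary in both cases; only the communication context separates them.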

Containers, Kubernetes, and serverless computing also make Next Generation Firewalls completely blind as they were never built to understand these new generations of microservices.  

In practice, IaaS has become PaaS: any application in the cloud is almost certainly using native service offerings from the cloud provider. Activity accessing these native cloud services never crosses the network, so the Next Generation Firewall has no visibility into this critical piece of an app.

User Identification in the Cloud

User identification is also more difficult for the Next Generation Firewall in the cloud, as the same user might have different permissions on the same application in different environments; in other words, production versus development environments change how users interact. Next Generation Firewalls have no context for deployment models, as they were built before the CI/CD concept existed.

The majority of activity in the cloud is not performed by human users but by machines or applications assuming roles to accomplish various tasks. The Next Generation Firewall is completely blind to these users, as they accomplish their tasks through API calls that never show up in network traffic.

The other challenge in the cloud is that users rely on service accounts or sudo to do their work, which means you cannot attribute activity to the right user just by looking at network traffic or Active Directory: the effective user is not necessarily the original user doing the work.

Enforcement Rules in the Cloud

The enforcement function is one of the main capabilities of the firewall, but in the cloud, service providers now offer their own ways to set firewall policies, such as security groups in AWS. These provide more control and are built from the ground up to support elasticity and tags, which allow finer-grained control. Next Generation Firewalls struggle with elasticity and have no context for machine tags.
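Tag-based enforcement in the style of cloud security groups can be sketched as a subset check: a connection is allowed if some rule’s required tags are all present on the instance. The tags and rules below are hypothetical, not actual AWS syntax.

```python
def allowed_by_tags(instance_tags, rules):
    """Security-group-style check: allow if any rule's required tag set
    is contained in the instance's tags."""
    return any(required <= instance_tags for required in rules)

# Hypothetical rule: traffic is allowed only into production web-tier instances.
allow_rules = [{"env:prod", "tier:web"}]
ok = allowed_by_tags({"env:prod", "tier:web", "team:payments"}, allow_rules)
denied = allowed_by_tags({"env:dev", "tier:web"}, allow_rules)
```

Because the policy keys off tags rather than IP addresses, it survives instances being replaced or scaled out, which is exactly where static IP-based rules fall apart.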

Next Generation Firewalls were built on static rules, which were impossible to maintain even in a static environment. Every firewall configuration I have come across contains at least ten rules that no one can explain, but that everyone is scared to touch because they do not know what removing them will break. In an elastic environment like the cloud, building and maintaining such rules is an impossible task.

New Data Set will be needed in the Cloud

To identify the apps and users in the cloud, you need a new set of data that does not exist in network traffic. Rules and signatures cannot be used; application and user attribution requires behavior and context.

Here is the list of applications, users and behaviors which are significant in the cloud, along with a comparison between a “Next Generation Firewall” and a solution natively built for cloud.

Application Visibility       Next Generation Firewall    Solution Built for Cloud
Custom Apps                  No visibility               App identification uses behavior and context
Containers                   No visibility               Supported
Kubernetes                   No visibility               Supported
Cloud Services               No visibility               Supported
Encrypted Traffic            No visibility               At host, so able to identify the app and user
Intra-VM Traffic             No visibility               All traffic on the host is also visible
Serverless                   No visibility               Supported
Machine/Cloud Tags           No visibility               Supported

User Visibility              Next Generation Firewall    Solution Built for Cloud
Assumed Roles                No visibility               Supported
SSH Users                    No visibility               SSH tracking makes it possible to attribute activity to the right users
Cloud Admins                 No visibility               Console activity using account API

Behaviors for Kill Chain     Next Generation Firewall    Solution Built for Cloud
Network Communication        IP address level            App/User/Container/Kubernetes
Privilege Changes            No visibility               Track users and their privileges
File Changes                 No visibility               File integrity monitoring (FIM)
User Activity                No visibility               SSH tracking to attribute activity to the right user
Cloud Config Changes         No visibility               Best practices and compliance
Account API Behavior         No visibility               Account-based IDS
Application Launches         No visibility               Application launch tracking
File Malware                 No visibility               SHA-based malware detection

Users are going to have to change the way they deploy infrastructure to the cloud. As they do, they will also need security solutions that are built in the cloud to secure the cloud. The “Next Generation” firewall, meanwhile, may need a new moniker, such as the “Grandfather’s Generation,” as cloud technology leaves it behind.

About the author: Sanjay Kalra is co-founder and CPO at Lacework, leading the company’s product strategy, drawing on more than 20 years of success and innovation in the cloud, networking, analytics, and security industries.

Copyright 2010 Respective Author at Infosec Island
Trojan Horses for the Mind, Part 2 of Building Impactful Security Awareness Messaging
Wed, 27 Mar 2019 11:08:00 -0500

In late 2018, I wrote about how we can use Trojan Horses for the mind when it comes to shaping messaging and creating influential awareness campaigns. In other words, the way we design and deliver our messages can become a Trojan Horse that sneaks past a user’s mental defenses.

Why is this important? Here’s why: the concept of “security awareness” suffers from a fatal flaw, what I call the knowledge-intention-behavior gap. Just because your people are aware of something doesn’t mean they will care. And even if they care and intend to do the right thing, a whole host of situations and contexts can interfere with the follow-through. So there is a gap between knowledge and intention, and a gap between intention and behavior.

We can use our Trojan Horses for the Mind to help close some of those gaps. And we can use them to create messages that people remember and care about.

My last post focused specifically on the emotional side – how people tend to make decisions based on emotion and then build a case for their decision based on logic. Let’s now talk about another Trojan Horse for the Mind: visuals.

Think for a moment about all the great companies, products, media sites, and networks that you interact with daily.

When you scrolled through each of those items in your mind, did you see their names as plainly printed words? Probably not. If you are like most people, you saw the companies’ logos; or, if you were thinking about a specific product, like McDonald’s chicken nuggets, your mind summoned a product image.

Images are, in effect, a compression algorithm: the brain easily and readily unzips bundles of data whenever it is presented with the image. Now consider what a brand and logo really is: a simple word, phrase, or symbol that encapsulates the values, products, services, and history of an organization or idea. Icons serve this purpose as well, packing complex meaning into a simple picture.

When it comes to building a successful awareness training program, you should always be seeking ways to embed volumes of meaning into simple, instantly digestible, images. For any behavior that you want to train on (password management, tailgating, incident reporting, secure document handling, etc.), consider the fullness of your message. And, as part of your training campaign, create compelling visuals and icons that represent that behavior. They can be photographs capturing the human impact of following (or not following) that behavior, or they can be icons placed at/near the point of behavior as context cues, and so on. The point is that the simple visual acts to ‘unzip’ the broader information bundle within the learner’s mind. That’s powerful!

Repetition is Magic

Remember that Britney Spears song that you hated the first few times you heard it? Then before you know it, the song is on auto-loop in your brain and you find yourself physically grooving to the music the next time you hear it. There’s a reason for that, and it emerges in all forms of communication: in the way words are used, in music, and in imagery.

Here’s the reason: familiarity breeds likability. Cognitive scientists refer to this as the familiarity effect or the mere-exposure effect.

One marker of a mature security awareness program is the seriousness the program leaders place on consistency in the visual and textual components of their communication. These security leaders approach their awareness programs with an entrepreneurial mindset and treat the branding aspects of the program with the same zeal.

Another reason to use repetition in the awareness context is that you are always battling the “decay of knowledge.” Simply stating something once will not likely have a lasting impact. As a result, your once-per-year training marathons are (sorry to say this) next to useless in shaping behavior. Instead, you need to adopt this mindset: If it is worth saying once, it is worth saying multiple times. If it is worth saying once, it is worth saying multiple times. If it is worth saying once, it is worth saying multiple times…

That’s why you remember phrases like, “See something. Say something.”

The Power of Imagery & Color

It’s super important for us to understand and appreciate the power of imagery. To be human is to inherently understand the power of pictures. The moment an image hits our retinas, our mind decodes not only the data in that image, but also assigns any preconditioned emotional response. So, imagery is important if you want to evoke or enhance the emotional impact of your security-related messaging. Simple text-based security awareness messaging will always be less effective than messaging that includes well thought out and designed visual components.

A discussion about images and design wouldn’t be complete without talking about the use of color. Colors serve a much greater purpose than just being pretty. Colors imply meaning, can evoke emotion, and help establish context. While there are some general rules of thumb that you can use when working with color, it’s important to recognize that the intended meaning behind your color choices may not be interpreted the same by everyone in your audience; there are no hard-and-fast rules.

One of the best ways to think about how to use color is to see what already exists that is like what you want to communicate. Let’s say you wanted to build messaging related to how employees can secure their home networks and help their kids make better security decisions. You may have already defined the practices and now you are trying to figure out how to package and promote the information.

If you aren’t an experienced designer (or even if you are), this is where Google can be your friend. You don’t have to understand color theory, have a degree in marketing, or have studied the psychology of color to create something that can be great. Just enter brand names or search terms related to family, kids, childhood, and so on, and look at the image results. In this example, you’d quickly notice that many of the colors commonly associated with family, kids, and childhood are yellow, orange, green, light blue, and sometimes purple. And after seeing these examples, you can piece together plausible reasons why these colors have become the cultural reference point for the scenarios that you want to relate to. Green is typically associated with life and growth. The orange and yellows can be reminiscent of the sepia tones that we associate with memory and nostalgia; and so on. That’s a valuable starting place. In this Googling exercise you may even come across examples of font styles and images that you may want to use in your messaging. This is way better than starting with a blank page and agonizing about how to begin.

I couldn’t begin to cover all the critical areas or pitfalls of design. Here are some suggested books if you want to take a deeper-dive into color, and design principles in general:

  • slide:ology by Nancy Duarte
  • Superpowers of Visual Storytelling by Laura Stanton, David LaGesse
  • Design Elements, Color Fundamentals: A Graphic Style Manual for Understanding How Color Affects Design by Aaris Sherin
  • Presentation Zen by Garr Reynolds
  • The Senses: Design Beyond Vision by Ellen Lupton (Editor)

About the author: Perry Carpenter is the Chief Evangelist and Strategy Officer for KnowBe4, the provider of the world’s most popular integrated new school security awareness training and simulated phishing platform.

Copyright 2010 Respective Author at Infosec Island
Internet-Exposed IBM BigFix Relays May Lead to Full Remote Compromise Thu, 21 Mar 2019 04:50:52 -0500 Internet-facing relays in IBM BigFix deployments could lead to information disclosure and potential full remote compromise if not properly configured, Atredis Partners security researchers have discovered. 

Tracked as CVE-2019-4061 and affecting BigFix Platform versions 9.5 - 9.5.11 and 9.2 - 9.2.16, the vulnerability is found in all deployments where relays that are exposed to the Internet are not configured as authenticating. 

This misconfiguration could allow an unauthenticated, remote attacker to query the relay and gather information about the updates deployed to the associated sites. 

“Internet-facing relays, if any, in a BigFix deployment might be configured as non-authenticating, which exposes the deployment to security risks,” IBM notes in an advisory.

“Security attacks in this context might mean unauthorized access to the relays and any content or actions, and download packages associated with them or to the Relay Diagnostics page that might contain sensitive information (for example: software, vulnerability information, and passwords),” IBM continues. 

According to Atredis Partners’ security researchers, BigFix deployments with external relays that lack authentication expose a very large amount of information to unauthenticated external attackers, and could even lead to full remote compromise. 

Some of the data an attacker could access includes server IP, server name, port numbers, digital signatures, and license information (details found in the masthead BigFix uses to publish info on installations), an index of configured sites, and a list of package names and versions. 

The researchers also note that the BigFix data is still accessible to an attacker with access to the internal network or to an externally connected system with an authenticated agent, even if relay authentication is enabled. 

“The best path to preventing a compromise through BigFix is to not include any sensitive content in uploaded packages,” the researchers note.

An Internet-wide survey Atredis Partners conducted revealed the existence of 1,458 BigFix relay servers with relay authentication disabled. The researchers say they were able to query the masthead and obtain information on each of the discovered relays.
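At its core, the researchers’ survey boils down to a single unauthenticated HTTP request per relay. A minimal sketch of such a check, assuming the standard BES port (52311) and the commonly documented masthead path — both should be verified against your own deployment, and only probe hosts you are authorized to test:

```python
import urllib.request

# Standard BigFix relay/server port and commonly documented masthead path.
# Treat both as assumptions; confirm them for your own environment.
BES_PORT = 52311
MASTHEAD_PATH = "/masthead/masthead.afxm"

def masthead_url(host: str, port: int = BES_PORT) -> str:
    """Build the URL a non-authenticating relay would serve its masthead from."""
    return f"http://{host}:{port}{MASTHEAD_PATH}"

def relay_exposes_masthead(host: str, timeout: float = 5.0) -> bool:
    """Return True if the relay answers the masthead request without auth.

    Only run this against infrastructure you are authorized to test.
    """
    try:
        with urllib.request.urlopen(masthead_url(host), timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False
```

A relay configured as authenticating should refuse this request, which is exactly the remediation IBM describes.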

“This list included numerous government organizations, large multinational corporations, health care providers, universities, insurers, major retailers, and financial service providers, along with a healthy number of technology firms,” the researchers reveal. 

After being informed on the vulnerability, the BigFix team updated the documentation and took steps to notify affected customers, a process completed as of March 18, Atredis Partners says. 

IBM recommends addressing the vulnerability by configuring Internet-facing relays in a BigFix deployment as “authenticating”. This would allow only BigFix clients in one’s environment to connect to the relay and would also ensure that all communication takes place over TLS (HTTPS). 

“This configuration also prevents any unauthorized access to the Relay Diagnostics page,” IBM notes. 

To enable the relays for authentication, one should head to the BES Support website and find the BES Client Settings: Enable Relay authentication fixlet. Next, they simply need to run the fixlet and wait for the action to finish.

Related: 5 Forecasts to Inform Digital Risk Protection in 2019

Copyright 2010 Respective Author at Infosec Island
1 Million Apps Patched in Android Security Improvement Program Fri, 01 Mar 2019 06:28:21 -0600 Over its five-year lifetime, the Android Application Security Improvement Program helped over 300,000 developers to fix more than 1,000,000 apps on Google Play, Google says.

The program was launched to help the Android ecosystem thrive by helping developers improve the security of their applications and eliminate vulnerabilities from them.

Through this initiative, Google scans all applications submitted to the official storefront for a variety of vulnerabilities. Should any issues emerge, the Internet giant then alerts the developer and helps them address the issues. 

This has helped fix over 1,000,000 apps since the Application Security Improvement Program’s launch. Last year alone, the program helped over 30,000 developers fix over 75,000 apps, the company says. 

“The downstream effect means that those 75,000 vulnerable apps are not distributed to users with the same security issues present, which we consider a win,” Patrick Mutchler and Meghan Kelly, Android Security & Privacy Team, note in a blog post. 

The program covers a large variety of problems in Android applications, including vulnerabilities in certain versions of popular libraries, and other issues with broader impact. 

The Internet search giant says it also focuses on improving existing checks and expanding them to cover more classes of security vulnerabilities, to ensure the program evolves to cover emerging exploits. 

Last year, it added warnings for SQL Injection, File-based Cross-Site Scripting, Cross-App Scripting, Leaked Third-Party Credentials, Scheme Hijacking, and JavaScript Interface Injection. 
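To illustrate the SQL injection class those warnings flag: on Android the fix is parameterized queries (e.g. passing `selectionArgs` to `SQLiteDatabase.query`) instead of string concatenation. The same contrast, sketched generically with Python’s built-in sqlite3 module:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0), ('bob', 1)")

user_input = "alice' OR '1'='1"  # attacker-controlled string

# Vulnerable: concatenation lets the input rewrite the query itself.
unsafe = conn.execute(
    "SELECT name FROM users WHERE name = '" + user_input + "'"
).fetchall()

# Safe: parameter binding treats the input as data, never as SQL.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()

print(unsafe)  # both rows leak: [('alice',), ('bob',)]
print(safe)    # no literal match: []
```

The automated check only has to spot the first pattern — a query built from untrusted strings — to warn the developer before publication.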

“Think of it like a routine physical. If there are no problems, the app runs through our normal tests and continues on the process to being published in the Play Store. If there is a problem, however, we provide a diagnosis and next steps to get back to healthy form,” Mutchler and Kelly note. 

Related: 18,000 Android Apps Violate Google's Ad ID Policies: Analysis

Related: Bug in Twitter Android App Exposed Protected Tweets

Copyright 2010 Respective Author at Infosec Island
The Role of Analytics in Protecting Healthcare Data Privacy and Security Wed, 27 Feb 2019 04:08:00 -0600 Healthcare has traditionally had a weaker security profile than most other industries. On the one hand, it is a favorite target for ransomware attacks, and for hackers looking to steal confidential patient records that have a high resale value on the black market. On the other, healthcare experiences more insider attacks than any other industry.  

Recent research reveals that healthcare companies face their biggest threats from malicious insiders that abuse their access privileges to view or exfiltrate personally identifiable information (PII) and protected health information (PHI) data. Verizon’s 2018 Protected Health Information Data Breach Report noted that 58 percent of data breaches in healthcare stem from employees or contractors.    

Clearly, payers and providers are severely challenged to prevent both insider and outsider attacks on patient and corporate data.   

To limit these threats, progressive organizations are using real-time analytics and risk-scoring to automate security controls. This approach monitors the behavior of users and devices, and applies analytics to risk-score them. When anomalies from normal patterns are detected, the risk score increases.   

The Insider Threat Landscape

Insider threats pose the biggest challenges to healthcare organizations because they can happen without triggering any security alarms. 

A trusted employee can steal confidential patient and corporate information, or tamper with it, and even sabotage systems. While many insider attacks are carried out by disgruntled employees, some can be unintended or simply human error. For example, an employee might mistakenly send confidential information to another employee or to an outsider, or give network access to someone who should not have it.   

In some cases, outsiders use social engineering to trick employees into giving up their account credentials. Such ploys include a spoofed email, phishing scheme or a “call from IT” seeking a person’s ID and password.  

Top Insider Violations

Some of the most common insider threat incidents in healthcare include: 

  • Snooping on the medical records of friends, family, neighbors, and celebrities
  • Sending sensitive data to personal accounts, competitors, or bad actors
  • Printing, downloading and exporting patient records and reports  

Most of these activities can be partially addressed by monitoring activity logs from Electronic Medical Records (EMR) Systems such as Allscripts, Cerner, and Epic and from security tools including firewalls, VPNs, etc. However, manual monitoring is incapable of identifying and remediating threats in real-time. This is where data analytics come into play.  

Security analytics powered by machine learning enables healthcare organizations to analyze large volumes of data in real time and to predict anomalous behaviors. Machine learning uses historical data to create behavior baselines for users, devices, and other entities.   

These baselines, which are used to identify deviations from normal patterns, are self-adjusting and change as the user and entity behaviors change. Such capabilities can be used not just to monitor behaviors, but to assign risk scores to individual users and devices — resulting in highly accurate information that singles out potentially risky activity in real time.   

Analytics and risk scoring facilitate the automation and orchestration of security decisions. Sometimes called model-driven security, this approach can respond to threats with the speed and accuracy of a machine by enforcing new controls when activity exceeds pre-determined risk thresholds.   

Real-time Detection and Prevention of Insider Threats

As a real-time security control, model-driven security collects all enterprise intelligence data that can be correlated back to a single user identity, such as proxy logs, entitlements, actions taken using those entitlements, and basically anything else that can be brought back into a data warehouse. Then, behavioral models are applied to the data to develop a risk score for users within the company. 

Risk scores are like credit scores. Just as a credit score goes up and down depending on money owed and payment history, a user’s risk score fluctuates depending on the actions taken with their access permissions. The risk score is adjusted dynamically, based on the user’s behavior. 

In this way, an insider’s risk score can serve as a dynamic security control. If the score is high, the organization can block the user’s account. Or, if it’s medium-risk, the user can be prompted to call in to the help desk to verify his or her identity. This has been historically impossible to do without the ability to risk score users dynamically. When a user’s risk score increases in a short amount of time, or exceeds a threshold, the organization can send out an alert, lock an IP address, restrict all traffic via DLP, open a security incident, etc.   
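A toy sketch of the thresholding described above, using a z-score against a per-user baseline. The numbers, thresholds, and responses here are purely illustrative assumptions, not any vendor’s actual model:

```python
from statistics import mean, stdev

# Hypothetical baseline: records a clinician accessed per day over a week.
baseline = [22, 18, 25, 20, 23, 19, 21]

def risk_score(observed: int, history: list) -> float:
    """How many standard deviations today's activity sits above the user's norm."""
    mu, sigma = mean(history), stdev(history)
    return max(0.0, (observed - mu) / sigma)

def control_for(score: float) -> str:
    """Map a risk score onto a graduated response (thresholds are illustrative)."""
    if score >= 6:
        return "block account and open incident"
    if score >= 3:
        return "step-up authentication via help desk"
    return "allow and keep monitoring"

# A normal day stays low-risk; a bulk-export spike trips the block threshold.
print(control_for(risk_score(21, baseline)))   # allow and keep monitoring
print(control_for(risk_score(400, baseline)))  # block account and open incident
```

The key property is that the baseline is per-user and self-adjusting: the same absolute activity level can be routine for one role and anomalous for another.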

Risk-scoring using analytics enables healthcare organizations to predict, detect and prevent insider threats, in ways that are impossible using static rules. It reduces much of the friction imposed by conventional security mechanisms, while providing continuous risk monitoring and real-time intervention when and where warranted.   

About the author: Nilesh Dherange is CTO of security and fraud analytics vendor Gurucul, and an expert on identity, data science and machine learning. Nilesh was an integral member of identity technology vendor Vaau, which was acquired by Sun Microsystems. He also co-founded BON Marketing Group and created BON Ticker — a predictive analytics tool for online advertising.   

Copyright 2010 Respective Author at Infosec Island
WINDSHIFT Hackers Target Government Agency in the Middle East Tue, 26 Feb 2019 10:02:28 -0600 A recently discovered threat actor was observed targeting a Middle Eastern government agency on several occasions over the course of last year, Palo Alto Networks security researchers reveal. 

Referred to as WINDSHIFT, the surveillance-focused threat actor is believed to have remained unnoticed for a long time, in part because it hacked other actors and re-used their malware. 

In a report from last year (PDF), DarkMatter said WINDSHIFT was observed launching sophisticated and unpredictable spear-phishing attacks against specific individuals, rarely targeting corporate environments.

The group’s Tactics, Techniques and Procedures (TTPs) were said to resemble those of Bahamut, a threat actor that security researchers also linked to Urpage last year. 

Following a long reconnaissance period, which could take several years, the group would attempt to steal the victim’s credentials by sending fake emails prompting the victim to reset their password for Gmail, Apple iCloud, Etisalat (the main ISP in the UAE), or professional email accounts. 

Should the credential harvesting fail, the actor then attempts to infect the victim with malware, also via email. The actor would then attempt to erase all traces of the attacks by shifting to a new infrastructure, gaining access to new malware, and shutting down malicious domains. 

The cyber-espionage group is known to be using macOS-targeting malware, namely WINDTAIL backdoor for file exfiltration, WINDTAPE backdoor for taking screenshots, and WINDTAIL downloader for WINDTAPE. The group is also believed to be using WINDDROP, a Windows-targeting downloader. 

Now, Palo Alto Networks says it has observed WINDSHIFT attacks that unfolded at a Middle Eastern government agency between January and May of 2018. 

In early January 2018, an initial attack featuring a WINDTAIL sample was observed originating from the remote IP address 109.235.51[.]110 to a single internal IP address within the government agency. 

The IP was associated with the domain flux2key[.]com, and the malware’s command and control (C&C) server IP address 109.235.51[.]153 was associated with the domain string2me[.]com, both known WINDSHIFT domains. 

Palo Alto Networks says that several other WINDTAIL samples originating from 109.235.51[.]110 were observed being directed at the same internal IP address from January through May 2018. 

All related WINDTAIL samples were macOS app bundles in zip archives. One of them had the C&C server IP address 185.25.50[.]189, which was associated with the domain domforworld[.]com at the time of activity.

Palo Alto Networks says it “assesses with high confidence that both the IP address 25.50[.]189 and the domain domforworld[.]com is associated with WINDSHIFT activity. Additionally, the IP addresses 109.235.51[.]110 and 109.235.51[.]153, corresponding to the previously validated WINDSHIFT domains flux2key[.]com and string2me[.]com, respectively, were also observed in use during this campaign.”

One of the attacker-owned IP addresses (109.235.50[.]191) was previously associated with Operation Hangover (which was analyzed several years ago), strengthening the previously identified relation between Operation Hangover and WINDSHIFT activity.

Palo Alto Networks also believes the attackers were unable to establish persistence within the targeted environment, given the multiple inbound WINDTAIL samples directed at the same internal IP address. 

Related: Researchers Draw Connections Between APTs

Related: Threat Actor Targets Middle East With DNS Redirections

Copyright 2010 Respective Author at Infosec Island