Tuesday, May 2, 2017

Is Zero Trust the best safeguard for your systems?


Originally published April 16, 2017  http://bit.ly/2qzGAUc


Written by Stephen Smith


Ransomware, malware, and phishing—executives within financial services have become all too familiar with these methods of cybercrime. Each year, banks spend an increasing amount of money and time fighting criminals who are attempting to steal valuable data.
Homeland Security Research says the U.S. financial institution cybersecurity market will exceed $68 billion by 2020, and my own company’s 2017 Banking Priorities Study indicates 57% of banks expect to increase spending on cybersecurity initiatives this year.
To combat cybercrime, banks must adopt a variety of strategies, ranging from the required security protocols set out by federal regulators to exploring new cybersecurity models. By going above and beyond what regulators require, banks can begin implementing new, innovative strategies that ensure their networks have the strongest lines of defense possible.
One innovative approach banks could begin to pursue is incorporating elements of a “Zero Trust” model into their security posture.
Zero Trust is an ideal that treats all attempts at accessing the network as suspicious until proven otherwise. While Zero Trust is not required by regulators, bankers can use its philosophy to strengthen their existing protections and build toward a more secure IT environment.
What is Zero Trust?
Zero Trust, a term originally coined by Forrester, is a data-centric network model that puts “micro-perimeters” around specific data so that granular rules can be enforced. In practice, this means that banks rigorously inspect all network traffic, both internally and externally.
This method forces a bank’s IT team to look at even their most trusted employees as possible threats. Today’s criminals often target high-level employees, and they are often successful, even when targeting well-trained individuals. For example, in the case of Ubiquiti Networks, the company lost $46.7 million due to CEO spoofing, in which the attacker impersonated the CEO via email and authorized a wire transfer to an account owned by the attacker.
Zero Trust is based on three core concepts:
1. Secure and verify all data assets and resources.
2. Strictly control and limit access.
3. Inspect and log all traffic.
The process includes identifying, segmenting, monitoring, and protecting sensitive data. This process also provides an extra level of security and a competitive advantage due to the level of trust established with customers, employees, and partners.
Identifying critical data
The first step in working toward a Zero Trust model is identifying critical data sets in relation to all the data on the network—banks can’t build protection if they are unaware of what they have. They must know every piece of critical data, where it resides on the system, how long it has been there, who has access to it, and who is the “custodian” of those data sets.
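As a simple illustration of what such an inventory can look like, here is a minimal sketch in Python. The field names and sample entries are hypothetical, not a prescribed schema; most banks would track this in an asset-management or GRC tool rather than in code.

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class DataAsset:
        """One entry in a hypothetical critical-data inventory."""
        name: str              # e.g., "customer PII database"
        location: str          # where it resides on the network
        classification: str    # "critical", "internal", or "public"
        custodian: str         # employee responsible for the data set
        created: date          # how long it has been on the system
        authorized_roles: set  # who is allowed to access it

    inventory = [
        DataAsset("customer PII database", "core-db-01", "critical",
                  "j.doe", date(2015, 3, 1), {"operations", "compliance"}),
        DataAsset("event catering menus", "file-share-02", "public",
                  "front-desk", date(2017, 1, 10), {"all-staff"}),
    ]

    # The critical entries are the ones that get micro-perimeters and extra controls.
    critical = [asset for asset in inventory if asset.classification == "critical"]
    print([asset.name for asset in critical])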
Today, banks manage massive amounts of data, ranging from catering menus for a special event to customers’ personal information. The latter, not the former, would be considered a critical data set and would need specific protocols built around it to ensure its security. However, because some critical data must be accessible by customer-facing staff, it becomes less guarded than other information.
This expanded access to critical data often makes certain employees prime targets for hackers. And while most employees have no ill-intent and do not willingly participate in phishing, it can happen to anyone.
Even those most knowledgeable about phishing and cybersecurity threats fall victim to attacks, as seen when hackers successfully targeted highly trained Pentagon officials. What looks like a routine or harmless email could open the floodgates for hackers to gain access to all critical data sets within a financial institution.
This is why it is important to ensure every staff member is up to speed on both cyber threats and security processes associated with the bank’s cybersecurity protocols.
Dissecting risk
The next element of a Zero Trust model that banks can pursue is dissecting risk, which is where rules and privileges come into play.
Through each network, certain rules are set in place as the first layer of defense for critical data, with privileged access coming after trust has been established. While the rules help silo any suspicious content from the network in general, privilege is what allows individuals access to certain vital information through role-based access controls (RBAC).
In many financial institutions, especially community banks, this is especially difficult due to the multiple responsibilities many employees manage.
For instance, one person may be both a teller and a part-time bookkeeper, with additional responsibilities in marketing. Setting RBAC limits can prove difficult because it is hard to define which role such employees specifically fill.
This challenge has only increased with the advent of the “universal banker” position, where an employee manages a wide array of customer-facing and back-office tasks.
A challenge for IT departments is defining and implementing controls as well as managing regulations, while not going overboard with protocols that make daily operations onerous. A delicate balance exists between protecting critical data sets and granting employees proper access to network information.
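To make the role-based idea concrete, here is a minimal sketch of an RBAC check in Python. The role names, permission table, and multi-role employee are hypothetical; in practice these entitlements live in the bank’s identity and access management systems, but the union-of-roles logic is what makes a multi-hat employee manageable.

    # Hypothetical permission table: role -> data sets that role may access.
    ROLE_PERMISSIONS = {
        "teller": {"customer accounts"},
        "bookkeeper": {"general ledger"},
        "marketing": {"campaign lists"},
    }

    def allowed_data(roles):
        """Union of the data sets an employee may access across all assigned roles."""
        allowed = set()
        for role in roles:
            allowed |= ROLE_PERMISSIONS.get(role, set())
        return allowed

    def can_access(roles, data_set):
        return data_set in allowed_data(roles)

    # A "universal banker" wearing several hats at once.
    universal_banker = ["teller", "bookkeeper", "marketing"]
    print(can_access(universal_banker, "general ledger"))  # True
    print(can_access(universal_banker, "loan files"))      # False: no assigned role grants it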
One defense tactic that Zero Trust recommends to combat this issue is setting clear data-retention policies. This means that every piece of data ultimately has an expiration date, even the food menu saved from the last catered event. This way, a phishing attempt may very well be deleted from the network before the criminals have an opportunity to unleash the malicious program.
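Here is a minimal sketch of what such a retention sweep might look like, assuming a hypothetical retention period per data classification and files whose age can be read from the filesystem; real retention tooling, and the legal retention requirements that drive it, will differ.

    import os
    import time

    # Hypothetical retention periods in days, by data classification.
    RETENTION_DAYS = {"public": 90, "internal": 365, "critical": 7 * 365}

    def expired_files(root, classification, now=None):
        """Yield files under root older than the retention period for the classification."""
        now = now or time.time()
        limit_seconds = RETENTION_DAYS[classification] * 86400
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                path = os.path.join(dirpath, name)
                if now - os.path.getmtime(path) > limit_seconds:
                    yield path

    # Review with the data custodian before anything is actually deleted.
    for path in expired_files("/srv/file-share", "public"):
        print("past retention:", path)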
Defending risk
While establishing a full-scale Zero Trust model in a bank is both costly and challenging, financial institutions can break down the model, and implement those pieces that best support their individual risk profile and cybersecurity needs.
One of the first steps could be as simple as assigning custodians to be responsible for certain sectors of critical data. These custodians review the data under their area of responsibility and ensure the proper data restrictions are applied.
After assigning custodians, another tactic banks can implement is logging, which is recording and inspecting all network traffic.
That is easier said than done. Because this includes everything from routers and firewalls to printers and cell phones, it is an enormous task: the average-sized bank can generate tens of millions of logs a day. And beyond the sheer volume of traffic, processing those logs and identifying suspicious activity is a huge challenge because of the cost of analyzing and inspecting everything plugged into the network.
However, logging is possible: many banks work with a managed services provider (MSP) to help defray the costs and staffing needs for the IT department. To establish Zero Trust protocols, banks also can work with an MSP to determine which devices require top-tier security and authentication. While ideally every device should be logged, banks often have to prioritize which devices to cover first.
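As a toy illustration of why that prioritization matters, the sketch below tallies events by device and severity so that scarce analyst time goes to the noisiest, highest-risk sources first. The log format is invented for the example; production log management relies on dedicated SIEM tooling rather than ad hoc scripts.

    from collections import Counter

    # Hypothetical log lines in the form "<device> <severity> <message>".
    logs = [
        "fw-01 HIGH blocked outbound connection to unknown host",
        "core-db-01 HIGH repeated failed logins for admin",
        "printer-03 LOW toner low",
        "fw-01 HIGH port scan detected from internal host",
    ]

    counts = Counter()
    for line in logs:
        device, severity, _message = line.split(" ", 2)
        counts[(device, severity)] += 1

    # The noisiest high-severity sources float to the top of the review queue.
    for (device, severity), total in counts.most_common():
        if severity == "HIGH":
            print(f"{device}: {total} high-severity events")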
Another Zero Trust tactic is implementing or subscribing to security monitoring services that allow the bank’s cybersecurity team to digest all of their data.
Such services enable the bank to create access controls and determine who is looking at that data and when, as well as process and identify behavior that contradicts established policies.
For instance, if User A is not assigned to a particular role but accesses critical data assigned to that role, why? An effective monitoring service will flag this activity so that the bank can follow its policies and procedures to address the issue.
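Here is a minimal sketch of that kind of policy check, using hypothetical role assignments and access events; commercial monitoring services correlate far more context, but the underlying comparison is this simple.

    # Hypothetical role assignments and the data sets each role is entitled to see.
    USER_ROLES = {"user_a": {"teller"}, "user_b": {"compliance"}}
    ROLE_DATA = {"teller": {"customer accounts"}, "compliance": {"audit reports"}}

    def flag_violations(access_events):
        """Return events where a user touched data that none of their roles permit."""
        violations = []
        for user, data_set in access_events:
            roles = USER_ROLES.get(user, set())
            permitted = set().union(*(ROLE_DATA.get(role, set()) for role in roles))
            if data_set not in permitted:
                violations.append((user, data_set))
        return violations

    events = [("user_a", "customer accounts"), ("user_a", "audit reports")]
    print(flag_violations(events))  # [('user_a', 'audit reports')] -> investigate per policy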
Thwarting cybercriminals
In today’s cyber-threat landscape, experienced and well-funded cyber crooks work non-stop to steal vital information from banks—and they’re doing so by attacking those with the most access to sensitive information. Where traditional security approaches may fail to protect critical data, the Zero Trust model might be the best approach to keeping the network secure.
While this may not be immediately feasible due to the cost and complexity associated with the model, it is important to begin implementing pieces of the Zero Trust model to enhance the protections already in place.

Friday, August 1, 2014

Use Vulnerability Scanning to Thwart the “Next Big Thing”

by Stephen Smith

Originally published July 30 2014 @  http://bit.ly/1qQojk9


Remember the Heartbleed bug? Of course you do. For most of us, the widespread vulnerability that came to light in April has left an indelible mark on our memories. In fact, depending on how well your institution has prepared its network and systems to handle such events, you might still be cleaning up the resulting mess.
If that’s the case, you wouldn’t be alone. Consider this all-too-typical example: there’s a bank that’s equipped with two to four public-facing servers, one firewall, one intrusion prevention system and 2,000 internal network hosts. When Heartbleed broke, that institution would have been tasked with manually checking each of those to determine their vulnerability to the bug—equaling weeks to months of work, with no guarantee that all problems would be caught. That is, unless the institution had enlisted a regular vulnerability scanning service.
Frequent vulnerability scanning—at least monthly—is a proactive measure that catches weaknesses before they cause harm. Through a scanning solution, a trusted vendor can quickly scan your entire infrastructure, then provide a thorough report that lists every host on your network that’s exposed to a vulnerability, so remediation can begin.
Scanning for network weaknesses once was considered optional, but times have changed. In our environments today, we have so many systems that are plugged into each other—servers, networks and IP-based devices—that vulnerability scanning should be part of day-to-day operations.
There are many options available, from the one-off scan that only tells you how bad your network is to a monthly subscription, which also provides a trending analysis as well as reports for the board of directors and regulators. It’s also important to note that regular scanning procedures mirror the actions taken by auditors in their reports to regulators.
A proper vulnerability scan can quickly check hundreds, even thousands, of files for network vulnerabilities, as well as such host-based weaknesses as misconfigured file permissions, over-exposure to public networks, missing patches and problems with commonly exploited applications like Web and mail servers.
The most comprehensive scans also will:
  • Perform credentialed configuration auditing of most Windows, Unix and network device platforms
  • Complete non-intrusive scans to avoid network interruptions
  • Deliver risk-level threat scoring for remediation prioritization (a brief scoring sketch follows this list)
  • Supply customized scan configurations for consistency and replication
  • Execute remediation of external vulnerabilities
  • Perform remediation of internal vulnerabilities utilizing a score-based method to prioritize the most vulnerable systems
  • Supply reports illustrating historical trending based on previous scanning
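To illustrate the scoring idea noted above, the sketch below ranks hypothetical scan findings by cumulative risk so remediation starts with the most exposed hosts. The hosts, findings, and scores are invented for illustration; real scanners draw their scores from vulnerability databases such as CVSS.

    # Hypothetical scan findings: (host, vulnerability, risk score on a 0-10 scale).
    findings = [
        ("mail-01", "outdated OpenSSL", 9.8),
        ("web-02", "weak TLS configuration", 6.5),
        ("print-07", "default SNMP community string", 4.3),
        ("web-02", "missing OS patches", 8.1),
    ]

    # Cumulative risk per host determines remediation order.
    risk_by_host = {}
    for host, _vulnerability, score in findings:
        risk_by_host[host] = risk_by_host.get(host, 0.0) + score

    for host, total in sorted(risk_by_host.items(), key=lambda item: item[1], reverse=True):
        print(f"{host}: cumulative risk {total:.1f}")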
So, what will that next big threat be? We shiver to think about it. But for now it’s important for your institution to be armed with information about vulnerabilities in your infrastructure, before auditors—and worse yet, cybercriminals—beat you to it.

Sunday, May 18, 2014

Take These Steps to Stomp out the Heartbleed Bug

by Stephen Smith
Originally posted April 17th 2014 @ http://bit.ly/1mITCZu

The so-called “Heartbleed” bug has caused quite an uproar since its presence came to light on April 7. Uncovered in OpenSSL, a widely used open-source implementation of the SSL/TLS (secure sockets layer/transport layer security) protocols, this Internet security flaw allows attackers unauthorized access to sensitive server data, including private encryption keys.
Though just discovered, the vulnerability has existed for more than two years, and any service that relies on a vulnerable OpenSSL build for its TLS connections may be exposed.
If an attacker gains access to cryptographic keys located in the memory of vulnerable servers and applications, they can be used to impersonate the site and collect such additional information as passwords.
The Heartbleed bug should be taken seriously, and financial institutions must complete the following steps in order to detect and remediate possible vulnerabilities:
Stage 1: Discovery
  • Scan all devices and network systems that might utilize SSL/TLS protocols. The vulnerability affects OpenSSL versions 1.0.1 through 1.0.1f. Keep in mind that OpenSSL version 1.0.1g, the newest version, directly addresses this vulnerability, and versions older than the 1.0.1 line are not vulnerable. (A minimal version-check sketch follows this list.)
  • Scan all possible common ports for SSL/TLS.
  • While public-facing services—those with a public IP address—are the most at risk and warrant top priority, scan private assets for this vulnerability as well.
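As a minimal illustration of the version check in the first bullet, the sketch below flags OpenSSL version strings that fall in the vulnerable range. It assumes you have already collected version strings from your hosts, for example from a software inventory; it is not a substitute for an actual scan, which also catches appliances whose versions you cannot query directly.

    # OpenSSL 1.0.1 through 1.0.1f are vulnerable; 1.0.1g and the older 1.0.0/0.9.8 lines are not.
    VULNERABLE_VERSIONS = {"1.0.1" + suffix for suffix in ["", "a", "b", "c", "d", "e", "f"]}

    def is_heartbleed_vulnerable(openssl_version):
        return openssl_version in VULNERABLE_VERSIONS

    # Hypothetical inventory of hosts and the OpenSSL versions they report.
    reported_versions = {"web-01": "1.0.1e", "vpn-01": "1.0.1g", "legacy-app": "0.9.8y"}

    for host, version in reported_versions.items():
        if is_heartbleed_vulnerable(version):
            print(f"{host}: OpenSSL {version} -> VULNERABLE, patch and reissue certificates")
        else:
            print(f"{host}: OpenSSL {version} -> not affected")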
Stage 2: Remediation
  • Update all versions of OpenSSL to 1.0.1g, and contact your vendors for all services that you do not directly control. Alternatively, you can have OpenSSL recompiled on vulnerable devices with heartbeat support disabled by setting this compile-time flag: -DOPENSSL_NO_HEARTBEATS.
  • If you locate any vulnerable devices or services, consider your private key compromised. Contact your certificate vendor to generate a new certificate as soon as possible.
  • Ensure you are using a unique certificate for each device or service you manage. In other words, if your organization has multiple websites and uses a single certificate for all of them, a compromise of one website means they all should be considered compromised. Avoid using a single “wildcard” certificate going forward.
  • After applying a new certificate to a service, consider having all users change their passwords.
Stage 3: Moving Forward
  • SSL/TLS best practices are well-documented. Qualys SSL Labs, for example, provides a full overview of best practices.
CSI is available to help our customers determine the presence of this vulnerability on servers and in services through such tools as vulnerability assessments. We also are on-hand to guide you through the remediation steps, which will vary greatly depending on individual circumstances.
Also, for those looking to learn more about the vulnerability and what steps should be taken to mitigate risk, we invite you to join us for a complimentary webinar on Tuesday, April 29, at 3:00PM CT. We’ll share insight into what financial institutions should do to safeguard their systems. You may register here.
Together, we will ensure the Heartbleed bug is stomped out of your systems for good.

Saturday, April 5, 2014

The Target Breach was Massive. What Should You Do?

by Stephen G. Smith
Originally posted Dec 20 2013 @  http://bit.ly/1ibCk1J

The timing for the 40 million-card Target data breach could not be worse—for consumers and financial institutions alike.

Since the breach was first brought to light by a security blogger on Wednesday, Dec. 18, Target has posted a letter to customers notifying them that the data exposed includes customer names, credit and debit card numbers and expiration dates, and card verification values. As of now, no significant PIN fraud has been reported, but that doesn’t necessarily mean PINs are in the clear.

We want to communicate immediately that CSI is working proactively with our core customers to mitigate potential card fraud. To that end, we've already reissued more than 15,000 Visa debit cards to our customers. Our fraud monitoring capabilities have allowed us to accomplish this, despite the fact that Visa has yet to issue any compromised card lists.

It’s important to note that this investigation is in its infancy, and while many details remain unknown, there’s no better time to review the best practices for keeping your financial institution and its customers’ data safe.

First, an event of this magnitude underscores financial institutions’ need to employ a proper fraud monitoring solution, which provides 24x7 transaction screening and blocks suspicious activities, greatly reducing, even preventing, fraud losses. The most sophisticated solutions merge automation with skilled analysts who track trends, issue denials in real time and quickly reissue new cards to customers. They even can pick up on unusual activity through such variables as merchant type and geography, and stop suspicious authorizations from ever taking place.

Other crucial security controls include an updated intrusion prevention system, endpoint protection and regular network vulnerability scanning. Further, consider employing managed security services to monitor your outbound traffic behavior 24x7 and pinpoint such suspicious activity as large amounts of data leaving your network to an unknown destination.
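As a simplified illustration of that kind of outbound monitoring, the sketch below flags flows that send an unusually large volume of data to destinations outside a known-good list. The flow records, threshold, and allow list are hypothetical; managed security services do this continuously and with far richer context.

    # Hypothetical outbound flow records: (destination, bytes sent in the period).
    flows = [
        ("core-processor.example.com", 2_000_000),
        ("203.0.113.45", 750_000_000),   # unknown destination, very large transfer
        ("cdn.example.net", 5_000_000),
    ]

    KNOWN_DESTINATIONS = {"core-processor.example.com", "cdn.example.net"}
    BYTES_THRESHOLD = 100_000_000  # flag transfers over roughly 100 MB to unknown hosts

    for destination, bytes_sent in flows:
        if destination not in KNOWN_DESTINATIONS and bytes_sent > BYTES_THRESHOLD:
            print(f"ALERT: {bytes_sent} bytes sent to unknown destination {destination}")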

In addition, keep communications with your customers open, and consider the following:

  • Alert customers that the Target breach could prompt a rise in such social engineering techniques as phishing, whereby cybercriminals posing as their bank will contact them in an attempt to extract additional financial data. If consumers are panicking at a time like this, they could fall prey to these scams. Let your customers know you will never ask for such personal information as PINs unless they initiate the contact.
  • Remind them to check their accounts daily, particularly if they shopped at Target during the pinpointed breach dates of Nov. 27 to Dec. 15. Any suspicious activity should be reported immediately to their financial institution.

The true breadth of this breach may remain unknown for weeks to come. For now, CSI is here to assist you and your customers in any way possible.

Friday, April 4, 2014

NIST Framework Provides Core Cybersecurity Guidelines

by Stephen G. Smith

The long-awaited set of cybersecurity guidelines from the Department of Commerce’s National Institute of Standards and Technology (NIST) was released in February, and hopes are high that it will provide a model to help U.S. businesses cost-effectively develop and maintain tools to manage increasing cybersecurity risks.

The NIST’s Framework for Improving Critical Infrastructure Cybersecurity, born from President Obama’s Executive Order 13636, outlines voluntary best practices for use by not only financial institutions, but also other business sectors including government and healthcare. It was developed from existing international standards and practices that have proven successful. While not meant to be a one-stop shop, the framework serves as a flexible and effective starting point for helping organizations map out high-level risk management concepts and connect them with regulatory rules and guidance—including today’s chief regulatory yardstick, the FFIEC’s IT Examination Handbook.

This baseline framework is sure to undergo periodic revisions, and the more it matures, the more effective it will be and the more likely it will inspire required regulatory standards, thereby fostering increased consistency among the different regulatory agencies.

While the framework as a whole guides institutions in identifying key risk management tactics, its Appendix A presents the Framework Core in an easy-to-navigate tabular format, listing common activities for managing cybersecurity risk. The tables are broken into sections—function, category, subcategory and informative references—each more specific than the last—that institutions can use to customize a cybersecurity risk management program. The informative references section can be particularly useful, because it comprises the same specific standards that regulators and examiners use, and provides solid ground upon which to build a program.

The framework also can be used to develop a basic checklist to compare against current processes and procedures. This allows financial institutions to create and refer to a baseline to help prioritize information security dollars. In addition, their compliance partner can perform an information security assessment to further ensure compliance.
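Here is a minimal sketch of such a checklist, using a handful of paraphrased, illustrative Framework Core entries and a hypothetical list of controls already in place; the real Framework Core in Appendix A is far larger and maps each subcategory to its informative references.

    # A few paraphrased, illustrative Framework Core entries: (function, category, subcategory).
    framework_core = [
        ("Identify", "Asset Management", "Physical devices and systems are inventoried"),
        ("Protect", "Access Control", "Access permissions are managed with least privilege"),
        ("Detect", "Security Continuous Monitoring", "The network is monitored to detect events"),
        ("Respond", "Response Planning", "Response plan is executed during or after an event"),
    ]

    # Subcategories the institution believes it already addresses today.
    implemented = {
        "Physical devices and systems are inventoried",
        "The network is monitored to detect events",
    }

    # The gaps become the baseline for prioritizing information security dollars.
    for function, category, subcategory in framework_core:
        status = "in place" if subcategory in implemented else "GAP"
        print(f"[{status}] {function} / {category}: {subcategory}")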

Over the coming months, the NIST will hold workshops to help organizations utilize the framework as well as review the efficacy of this original version.

So time will tell if enough organizations use this framework to help foster a heightened cybersecurity defense strategy for the nation as a whole. If you’re unsure of your level of risk management, the framework is a good place to start.