My colleague Steve Helland and I were talking this week about data privacy and security at a meeting of the firm’s Privacy group. Steve chairs the firm’s Internet, Technology & E-Commerce group and he recently co-chaired a full day conference Data Privacy and Security for In-House Counsel for the Minnesota State Bar Association. Our group discussed Steve’s takeaways from the conference and I asked whether we could post his summary of the event on the blog. As you can see, Steve agreed.
The following post and checklist were written by Steve Helland and adapted from his presentation on March 21, 2013 at the MSBA data privacy and security conference. Many thanks to Steve for contributing this post…
You can’t do it all in a field as robust and evolving as data privacy and security. The purpose of this checklist is to describe the core oversight duties of those in the board room and the C-suite as of spring 2013. As such, this checklist focuses primarily on setting values and priorities and on assigning roles, structure, and process.
Please note: (1) There is no one-size-fits-all approach, so consider the unique circumstances of your organization; (2) Although much has been written about privacy and security generally, law and scholarship specifically regarding the duties of the board and senior management on privacy and security issues is significantly less developed.
□ Decide, preliminarily, the relative importance of privacy and security issues to your organization.
Comment: Consider the following:
(1) Are you in a highly-regulated field such as finance or healthcare?
(2) Do you control or have access to large amounts of data?
(3) Are trade secrets or other proprietary information especially valuable assets?
(4) How important are customer expectations and public perception?
(5) What are your competitors doing?
(6) Are there any known substantial and specific threats / risks?
Benchmark: Corporate directors (48%) and general counsel (55%) listed “data security” as their number-one concern (ahead of operational risk and company reputation). Source: 2012 Corporate Board Member / FTI Consulting, Inc., “Law and the Boardroom Study: Legal Risks on the Radar.”
□ Allocate reasonable financial, human, and technical resources.
(1) Do you have confidence in your IT team / CIO?
(2) Do they have a sufficient budget?
□ Philosophy: Treat trade secrets, “Big Data,” and other critical proprietary information with the same level of care and attention you devote to the preservation and growth of other core assets.
□ Appoint a [Chief Privacy Officer (CPO)][Chief Information Security Officer (CISO)][other management-level person with “privacy and security compliance” as an explicit or sole component of the job description].
(1) For this item, like virtually all others on the checklist, the minimum duty will vary with the size of the organization and the quantity and type of information and data held (including whether the industry or data type is regulated, such as health organizations under HIPAA, financial organizations under Gramm-Leach-Bliley, or any entity collecting information from children online under COPPA).
(2) This person should monitor for compliance requirements: (a) applicable law; (b) contractual obligations (e.g., in NDAs or security provisions in other agreements); (c) your own policies; (d) certification / compliance programs in which you participate (e.g., EU Safe Harbor, TRUSTe); (e) industry norms (as falling short may be negligence).
Benchmark: Among smaller and mid-size organizations, a dedicated Chief Privacy Officer is still relatively rare.
□ Retain [or at least identify] experienced legal counsel.
(1) Receive updates on legal developments from time to time.
(2) Involve counsel in substantial transactions such as M&A and key vendor agreements.
(3) If there is a substantial international component to your data and security issues, strongly consider retaining country-specific or region-specific legal counsel.
□ Retain [or at least identify] computer forensic consultants; other consultants such as PR.
(1) In the event of a breach and/or an event that may involve litigation, I recommend engaging an outside computer forensic firm.
(2) This item may be most appropriate for larger organizations.
(3) This item is more appropriate for a CIO or General Counsel than for the board level.
□ Assign a committee of the board with oversight of privacy and security issues, and explicitly add responsibility for privacy and security to the committee’s charter. Consider creating a committee if no appropriate one exists (e.g., a “Risk Committee” or similar, for which privacy/security could be one aspect of enterprise risk).
Comment: Applicable for larger entities. This could also be housed in a Risk Committee, Compliance Committee, or other committee of the board. Smaller entities may prefer to keep this function within the full board.
Benchmark: Among Global 2000 entities, 96% have an Audit Committee, 56% have a Risk / Security Committee, and 23% have an IT / Technology Committee. Source: “Governance of Enterprise Security: CyLab 2012 Report,” Jody R. Westby.
□ Receive information. The board and senior management should receive periodic reports and information from the CIO, IT and General Counsel regarding significant security risks, issues, breaches, and other items.
Comment: The board of directors and senior management should receive enough information to be familiar with the organization’s top privacy and security issues and how the organization is managing those items.
□ Conduct an audit. Include administrative, technical and physical elements.
(1) Oversight by full board or a committee such as the Audit Committee.
(2) Self-audit vs. outside audit?
(3) Brand-name audits, such as SSAE 16 (which replaced SAS 70)?
(4) If possible, benchmark your organization against similar organizations to avoid falling behind (failing to meet industry standards may be negligence).
(5) Do you know what your own policies are and do you follow them?
(6) Do you comply with contractual or similar obligations to others (e.g., abiding by NDAs or Payment Card Industry requirements)?
(7) Focus on the most important assets.
□ Written policies. Then communicate and train.
□ Agreement tool kit.
Comment: Make available solid templates for: NDAs or similar agreements with employees, vendors, and partners; specialized agreements as required, such as Business Associate Agreements under HIPAA. The agreement tool kit should be disseminated to appropriate personnel with contracting authority, along with training in how to use it, plus a process to report and track exceptional terms and requirements.
□ Diligence on key vendors and partners. How are their practices? Any breaches?
Comment: This may be as simple as a Google search: you don’t want to be partners with a known data-bungler. Include privacy and security diligence as part of M&A and other major transactions.
□ Review insurance coverage.
Comment: Is general liability, errors and omissions sufficient? Consider “cyber risk” or “privacy liability” coverage (there’s a difference between these two). Be cautious regarding exclusions, especially “force majeure” / “act of God/war,” in light of foreign-government-sponsored hacking.
Benchmark: Only 35% of public companies have cyber insurance. Source: Chubb 2012 Public Company Risk Survey.
□ Revisit privacy and security issues from time to time; stay current.
□ Ensure at least one member of the board is knowledgeable in IT issues.
Comment: If your full board still isn’t sure what the Internet is and doesn’t use email, they will not be in a position to critique inputs on all of the above.
Thanks so much to Steve for contributing this post!
We are addressing data privacy and security with our clients on a regular basis in many different areas and industries (e.g., employment and trade secrets, healthcare, financial services, and many more). So now that you have gone through Steve’s checklist, where do you all stand when it comes to data privacy and security? As always, we would love to hear from you.
We have been discussing the risks personal devices can pose for business data corruption, loss or theft quite a bit of late. These issues were also highlighted at the RSA Security Conference (a gathering of security industry experts), and we have focused our attention on online security, personal information privacy, and business data risks.
So, let’s review. In IBM’s Plan to Manage Smart Phone Security Issues – Not Just About “Is Siri an Apple Spy?”, we reviewed different protocols and procedures for managing employee use of personal electronic devices. We talked about the need for businesses to recognize and adapt to a corporate life with BYOD because – let’s face it – personal devices are here to stay. We firmly believe that with policies, education, and training, employees should at least gain a minimal understanding of the potential security danger of commingling personal and business data, the vulnerability to unauthorized electronic intrusions (see our post: And Yet Another Security Risk to Mobile Devices . . . Malware), and the ultimate cost to a business of lost or stolen data, including trade secrets. These steps can also protect your organization should you be required to remotely wipe a device that is lost, stolen or “removed” by a departing employee.
What we have seen, unfortunately, is that even with the best policies, education and training, no service or device is fully secure – whether the result of state-sponsored hacking of U.S. companies by other governments, or cyber intrusions by groups like Anonymous. Security vulnerabilities exist. This is but a short list of some of the recent security breaches: Google’s two-step login verification process was bypassed, allowing control of a user’s account; Evernote, a Web-based note-sharing service, reset 50 million users’ passwords following an attack on users’ accounts; Facebook, Apple, Microsoft and Twitter have reported recent cyber-attacks; like Evernote, Twitter reset the passwords for 250,000 accounts whose encrypted passwords may have been accessed; and Dropbox, an electronic storage service, reported a large loss of data for a number of subscribers. (For more information, see NBC News, Evernote resets 50 million passwords after hackers access user data, Google patches ‘loophole’ in two-factor verification system, and His firm accused China of hacking the US; now he awaits the consequences).
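For the technically curious: the “encrypted passwords” mentioned above are, in well-run services, stored as salted, slow hashes rather than recoverable ciphertext. Here is a minimal, illustrative sketch of that general technique (the function names are ours, not any particular service’s implementation):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None, iterations=100_000):
    """Store only a random salt and a slow, salted hash -- never the plaintext."""
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify_password(password, salt, expected, iterations=100_000):
    """Re-derive the hash from the login attempt and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, expected)
```

Even with this protection, an attacker who copies the database can guess passwords offline at leisure, which is why breached services like Evernote and Twitter reset credentials anyway.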
The problem is that once an employee removes corporate data from the network, protecting and securing that data becomes much harder. “My peers are killing me,” John Oberon, information technology chief for Mashery, a 170-employee company that helps other companies build applications, reported to the New York Times, Where Apps Meet Work, Secret Data Is at Risk. “[T]here’s only so much you can do to stop people from forwarding an e-mail or storing a document off a phone.” (This is still one of the main ways employees take data…) And employees will find their own ways to connect with one another. Indeed, Netflix recently found its employees using 496 applications for data storage, communications and collaboration. Yikes. “People are going to bring their own devices, their own data, their own software applications, even their own work groups,” said Bill Burns, director of information technology infrastructure at Netflix. The question becomes what are you doing as an organization to monitor, limit or otherwise control what employees are doing on their devices? Is it enough?
And what if the security dilemma is really not the employee’s fault? HTC America, a global manufacturer of devices, recently settled a complaint with the Federal Trade Commission. The FTC alleged that HTC America failed “to take reasonable steps to secure software” in its Android, Windows Mobile and Windows Phone smartphones and tablets. According to NBC News, HTC subject to 20 years of security reviews because of holes, the FTC reported that “[t]he company didn’t design its products with security in mind.” “HTC introduced numerous security vulnerabilities that malicious apps could exploit to gain access to sensitive data and compromise how the device worked.” Even worse, the FTC alleged “HTC pre-installed a custom app that could download and install apps outside of the normal Android permission process.” To settle the FTC matter, HTC America agreed to create and push software patches to millions of its mobile devices, and to accept independent security assessments for the next 20 years. This case represents the first time the FTC has pursued a mobile device company over security concerns, or ordered a company to create and push a software fix as part of a settlement.
In the end, whether caused by employees or by device manufacturers, security issues cost businesses money. Security concerns can waste valuable IT time and money, and more importantly hurt a business’ reputation with its customers. So, what are you doing? I have been talking with CIO’s and industry experts to gain different perspectives and options for addressing data protection and security concerns. I will post some conclusions and suggestions in the weeks to come. In the meantime, we would love to hear what you are doing.
According to a recent survey by Symantec, roughly “half of employees who left or lost their jobs in the last 12 months kept confidential corporate data” and “40 percent plan to use it in their new jobs.”
That headline should be enough to stop any employer in their tracks. But there’s more. Not only did employees take confidential information from their employers, they apparently didn’t even feel guilty about it. On the contrary, 51% said it was “acceptable to take corporate data because their company does not strictly enforce policies” and 62% said that it is “acceptable to transfer work documents to personal computers, tablets, smartphones or online file sharing applications” with a majority saying they never delete such data “because they do not see any harm in keeping it.”
Clearly, companies need to be doing more to protect their data and intellectual property. Confidentiality and data security policies, while an important first step, are only the foundation to protecting confidential and trade secret information.
As with many things in life, actions speak louder than words. In addition to implementing appropriate policies, businesses need to back up those policies with actions. It’s important that employees (and managers) receive training on what information is confidential, why it’s confidential, and why confidentiality matters to the company. It is also critical that companies actually treat confidential information like it’s confidential, by, for example, implementing appropriate security protocols (i.e. passwords, restricted access, monitoring, etc.). While these are basic steps, they are important and, according to the Symantec study, they are still too often being overlooked.
Another tool to protect your company’s confidential and trade secret information is to have your employees sign confidentiality and nondisclosure agreements. Those agreements should be updated to reflect today’s technological advances, as well as to address new employee uses of technology. Too often, employers don’t think about confidential or trade secret data stored on personal mobile devices or personal computers until after an employee has resigned or been terminated. By then, it can be too late to get that important data back.
So what does this all mean to you? In short, it appears from the Symantec survey that employees are still not getting the message about who owns your company’s data. Therefore, if you don’t take additional steps to educate your employees and protect your confidential or trade secret information, it may just walk out the door. What have you been doing to protect your data? As always, we would love to hear from you.
Do you worry about the security of your online life, but never take any action? For example, you know that you shouldn’t use the same password for your email accounts that you use for your online banking, but for whatever reason you can’t quite bring yourself to make the change? Well, you’re not alone, but this fascinating (and scary) article from Wired.com may be enough to push you to take action.
In How Apple and Amazon Security Flaws Led to My Epic Hacking, the author, Mat Honan, describes what happened to him. “In the space of one hour, my entire digital life was destroyed. First my Google account was taken over, then deleted. Next my Twitter account was compromised, and used as a platform to broadcast racist and homophobic messages. And worst of all, my AppleID account was broken into, and my hackers used it to remotely erase all of the data on my iPhone, iPad, and MacBook.”
Like I said, a nightmare come true! According to Mr. Honan, “[i]n many ways, this was all my fault. My accounts were daisy-chained together. Getting into Amazon let my hackers get into my Apple ID account, which helped them get into Gmail, which gave them access to Twitter. Had I used two-factor authentication for my Google account, it’s possible that none of this would have happened.”
Now, stop right there … two-factor authentication? What’s that? According to Google, “2-step verification helps protect a user’s account from unauthorized access should someone manage to obtain their password. Even if a password is cracked, guessed, or otherwise stolen, an attacker can’t sign in without access to the user’s verification codes, which only the user can obtain via their own mobile phone.”
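Google’s description maps to a well-known open standard, RFC 6238 (TOTP): your phone and the server share a secret, and each can independently derive the same short-lived code from the current time. Here is a minimal sketch of the general technique – an illustration of the standard, not Google’s actual implementation:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, for_time=None, step=30, digits=6):
    """Derive a time-based one-time password (RFC 6238, HMAC-SHA1)."""
    key = base64.b32decode(secret_b32)
    # Both sides count 30-second intervals since the Unix epoch.
    counter = int((time.time() if for_time is None else for_time) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# RFC 6238 test vector: at time 59, the 6-digit code is "287082"
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", 59))  # -> 287082
```

A password thief who lacks the phone (and thus the shared secret) cannot produce a valid code, which is the whole point of the second factor.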
If you have a Gmail account and you haven’t enabled 2-step verification, perhaps now is the time. And it’s not just me preaching this advice, it’s gone viral. See, for example, James Fallows of the Atlantic’s post, “Turn On Gmail’s ’2-Step Verification.’ Now.” In addition to citing the Wired.com article, Mr. Fallows’ wife has dealt with her own hacking horror story, when “six years’ worth of [her] email — and associated photos, research notes, book drafts, calendar info, contacts, attached-file data, memorabilia, etc. — were all zeroed out by a hacker, who was using the ‘Mugged in Madrid’ scam and was probably operating from West Africa.”
What was particularly fascinating about Mr. Honan’s article, however, was the fact that he ended up in a direct message exchange on Twitter with one of the actual hackers (a 19 year old using the name Phobia). Mr. Honan asked him why he was targeted and the hacker said that they just liked his Twitter handle. “That’s all they wanted. They just wanted to take it, and f*** sh** up, and watch it burn. It wasn’t personal.” I think Mr. Honan would take issue with that, especially because he lost every single photo from the first year and a half of his daughter’s life.
Luckily, it appears that some of the security concerns raised by Mr. Honan regarding Amazon and Apple are already being (quietly) addressed (See Amazon Quietly Closes Security Hole After Journalist’s Devastating Hack and After Epic Hack, Apple Suspends Over-the-Phone AppleID Password Resets). But the article still raises a lot of scary issues.
From an employer’s perspective, this article also serves as yet one more reminder to make data security a priority. What if Mr. Honan were your employee and had been using his MacBook to work on a sensitive company project? With employees using multiple devices for both personal and business reasons, a hacker can breach not only the security of someone’s personal information, but confidential company information as well.
Have you been hacked? What happened and what did you do to put your digital life back together? What steps do you (or your company) take to protect your security online? We’d love to hear from you.
About a month ago, online media outlets were all aflutter about IBM’s demand that its employees turn off Siri on their iPhones. IBM feared that the iPhone’s voice-activated assistant, “who” uploads your queries and user data to Apple’s servers, could reveal confidential or sensitive business information. While I agree this is a potential problem, and admit that I now rarely use Siri, I think the media hype missed a much bigger point – IBM’s disclosure provides an outstanding opportunity to analyze how this Fortune 500 company deals with employee use of personal smart phones and tablets while managing the complexity of corporate security. So, let’s dig a little deeper!
In MIT’s Technology Review, IBM Faces the Perils of “Bring Your Own Device”, Jeanette Horan, IBM’s Chief Information Officer, described what actions IBM took when it started to let employees use their personal smart phones and tablets for work purposes. While IBM still furnishes a staggering 40,000 BlackBerrys to a small segment of its employees, some 80,000 workers reach internal networks using other types of smart phones and tablet devices. Here are just a few of the lessons we can learn from IBM’s security policies (we are happy to see that they mirror the advice we provide to our clients during seminars on information security!):
- Recognize and acknowledge that your employees will use their personal electronic devices for company use. Ignoring this trend may lead to corporate security breaches, and the potential for lost information and money.
- Understand that employee use of personal devices will not save company money. Companies will spend as much or more on IT security than the cost of company owned smart devices. The trend simply poses new challenges because personal devices are filled with software not controlled by the company. (See our post: And Yet Another Security Risk to Mobile Devices … Malware).
- Understand that your employees understand next to nothing about electronic security. IBM surveyed its employees and found many employees were “blissfully unaware” of what popular apps did, and the potential security risk for each. (What would a survey of your employees reveal?)
- Establish guidelines about which apps employees can use and which to avoid. IBM developed a list of banned applications, and tried to ensure that employees understood why these products are dangerous to internal corporate security.
- Do not let employees auto-forward company emails to personal email addresses. IBM’s survey also revealed that employees violated protocols by automatically forwarding their company e-mails to public Web mail servers or using their phones as Wi-Fi hotspots, practices that pose a potential for unauthorized intrusion and snooping.
- Educate your employees as to why certain activities are inherently dangerous, and what harm may come to the company and its employees if there are unauthorized intrusions.
- Treat each individual employee and their devices differently. IBM created 13 different personas for the different types of its employees. The company then matches the persona to an employee. The higher the risk – the more security protocols required on the smart phone or tablet. While most companies don’t need 13 different personas, it is good practice to think about what risks are presented by different employees, and then develop standards for each group. Maybe your company only needs three or four different personas. To that end, well thought out, and conveyed, standards ultimately give your employees the tools to protect sensitive and secret information.
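The persona idea above boils down to mapping risk tiers to required device controls. A purely hypothetical sketch (the tier names and controls here are our invention for illustration, not IBM’s actual scheme):

```python
# Hypothetical persona tiers mapped to required device controls.
REQUIRED_CONTROLS = {
    "low":    {"passcode"},
    "medium": {"passcode", "encryption", "remote_wipe"},
    "high":   {"passcode", "encryption", "remote_wipe", "vpn_only", "app_allowlist"},
}

def missing_controls(persona, device_controls):
    """Return the controls a device still lacks for its assigned persona."""
    return REQUIRED_CONTROLS[persona] - set(device_controls)

# A high-risk employee's phone that has only a passcode and encryption:
print(sorted(missing_controls("high", {"passcode", "encryption"})))
# -> ['app_allowlist', 'remote_wipe', 'vpn_only']
```

In practice, a device that fails this kind of check would be blocked from internal networks until IT remediates it.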
So how does IBM implement its policies? IBM requires each personal device to be configured with appropriate security protocols by its IT department before an employee can use it. If the device is lost or stolen, the IT department can then wipe or erase the device remotely. IBM’s IT department also disables public file-sharing platforms and Siri (now we come back to Siri). Disabling these services limits the potential for accidental distribution of sensitive or secret company information.
Now what about Siri? The concern over Siri arises from how Siri-launched searches, e-mails, and queries are stored, and for how long. According to Apple’s iPhone Software License Agreement: “When you use Siri or Dictation, the things you say will be recorded and sent to Apple in order to convert what you say into text.” Siri also collects other information – names of people from your address book and other unspecified data. But why does Apple want this information? Apple won’t say. The user agreement simply says: “By using Siri or Dictation, you agree and consent to Apple’s and its subsidiaries’ and agents’ transmission, collection, maintenance, processing, and use of this information, including your voice input and User Data, to provide and improve Siri, Dictation, and other Apple products and services.”
While some believe that Siri is not spying on you – but simply “learning” from you – other experts are not so sure. What prevents Apple from trawling important corporate information from competitors, and using it to its advantage in developing new products and services? Nothing. The sky may not be falling, but it is a little naïve to believe that corporate spying and espionage do not exist.
In the end, employee owned smart devices are here to stay. Your company’s IT department will ultimately need to address issues of security, ownership and the like. As I like to remind people, it is much better to address security issues proactively rather than after a major breach. So, does your company need to review its mobile device policies or start by at least implementing some? Let us know – we always like to hear how our readers will consider or implement the tools we discuss.
This post is the last in a series of three addressing recent social media surveys. If you recall, last week we discussed the findings of a new survey conducted by TELUS and the Rotman School of Management. That survey concluded that an outright ban on social media usage increased a business’ risk for cyber intrusion by approximately 30 percent. (A New Twist on Business Security – Banning Social Media Can Increase Security Breaches?) Well, as you may know, there really is no definitive answer to the question of how much access employees should be given to social media. Case in point, another study conducted in July 2011 by Ponemon Institute, a research firm, and Websense, Inc., concluded that as a company’s social media usage increased so too did the firm’s risk for viruses and malware. Don’t these two surveys appear to conflict?
The Ponemon study, as reported in Bloomberg Law, Facebook, Twitter Increases Companies’ Security Risks, found that more than one-half of the businesses surveyed reported an increase in cyber-attacks as a result of employees’ usage of social media networks. Approximately 25 percent of the companies experienced a 50 percent increase in attacks. What drove the results of the Ponemon and Websense survey? The global study reported that as social media usage played a larger role in a business’ practice, many organizations found themselves ill-equipped to deal with the accompanying security risks. Researchers discovered that only 35 percent of the firms worldwide had a social media usage policy in place, and of those with a policy, only 35 percent enforced it.
“A lot of the organizations still didn’t have an acceptable use policy,” said Larry Ponemon, chairman and founder of Ponemon Institute. Of those businesses with a usage policy in place, Mr. Ponemon told Bloomberg Law that “a policy that isn’t vigorously enforced isn’t meaningful.” As co-author Norah Olson Bluvshtein noted about social media training (only 27% of companies reported providing social media training to employees) in her post New Statistics on Social Media At Work – Who’s Using It and Is It Effective? – employers still have a long way to go on implementing appropriate and effective policies.
How did most of the attacks reported in the Ponemon study occur? The study found that the attacks were “socially engineered” – Bloomberg called it the “click-trick.” What does that mean exactly? Patrick Runald, a researcher at Websense, Inc., explained that users may be enticed to click on a video pop-up, for example, “which takes you to a page off of Facebook, where they trick you into downloading something.” With the download come cyber viruses and malware.
So, do the surveys really conflict? No, not really. The Ponemon study simply confirms that a workforce which does not understand the dangers beneath the surface of many legitimate social media network sites poses a great risk to the business’ IT safety. As we discussed last week (A New Twist on Business Security – Banning Social Media Can Increase Security Breaches?), a workforce educated on the importance of cyber security and adherence to legitimate social media usage policies remains the best alternative to protect a business’ IT future. That means not just a companywide review of the company’s cyber security policies, but a discussion with employees of how, why and where security breaches occur, and a demonstration of how things like the “click-trick” work in the cyber-world, where malicious packages are simply waiting for the uneducated worker to download viruses or malware.
We may sound a bit like a broken record here, but we have often preached that sound social media policies, a workforce educated about the importance of cyber security, and vigilance in the appropriate use of social media will put a company’s security risks in check. I believe the two studies discussed support this important point.
What do you think? Drop a line and let us know.
In the next week, we will write a series of posts on recent studies about the impact of social media use on business. The first involves the impact of banning access to social media sites on system security. As we often discuss with clients, it is important to consult marketing, human resources and IT (at the very least) when making decisions about social media use. We think these studies will demonstrate why.
TELUS, a leading provider of security research, and the Rotman School of Management conducted the study regarding computer security breaches. The study found that companies which banned employee use of social media sites, such as Facebook, were 30 percent more likely to suffer an IT security breach than those with a more lenient policy. On average, the firms which blocked social media sites experienced 10.3 security incidents over a 12-month period, compared to 7.2 breaches for more lenient companies. Doesn’t seem to make sense, does it?
Apparently, the director of security and risk consulting with TELUS agrees. He reported to itbusiness.ca (Facebook bans at work linked to increased security breaches):
It might seem counterintuitive, but the survey results confirm what we have been tracking over the last two years. No social networking policies are actually forcing users to access non-trusted sites and use tech devices that are not monitored or controlled by the company security program.
Do you think an outright ban on social media sites would actually work? Do outright bans ever work? I think not. According to an expert in the field, Walid Hejazi, professor of business and economics at Rotman, it’s the proverbial shut one door and a window opens. Professor Hejazi told itbusiness.ca, “[i]f users deem their actions are justified they will find ways to circumvent firewalls or bring their own devices to surf sites or even access files that they are not authorized to.” The study showed that a policy banning access may actually “force” users to access non-trusted sites.
Sounds like what we predicted in our posts Time Suck or Morale Booster? How Does Social Media Impact Employee Productivity? and So You Let Your Employees Use Smartphones For Work? Are You Being Smart About It? – that an outright ban on social media, or an employee’s use of a personal mobile device for work, is not necessarily the right answer. What is? Employee education and trust in your employees. A workforce educated on the importance of cyber security and adherence to policies remains the best alternative. Experts agree.
According to Etges, the TELUS director of security and risk consulting quoted above (as reported to itbusiness.ca and supported by the study):
True buy-in to security policies can only be achieved by educating employees and explaining to them the impact of unsecure practices. Our survey showed that when it is explained to workers that breaches impact the bottom line, customers and themselves, as much as three quarters of the employees were prepared to comply with security policies.
Are you surprised that an educated workforce would "buy in" to security protocols if management explained the financial ramifications of a security breach? The logic seems pretty clear. By and large, I believe educated employees will normally do the right thing, whether it relates to cyber security, use of personal mobile devices or simply safe cyber practices. Certainly, however, the possible financial toll on employees' compensation should be a potent reminder to the workforce that cyber security breaches cost money. Education and pocketbook economics should drive the majority of workers to comply with security policies and procedures. (I just heard on MPR (Minnesota Public Radio) that money motivates people to stick with weight loss plans, so why wouldn't it work for following company policy?)
What do you think? Have you had success in educating a workforce on cyber safety issues? If so, let us know what you did, and how it went. We remain interested in how this will pan out. Look for our next posts, which may provide alternate views on this issue!
In a series of related posts in June 2011, So You Let Your Employees Use Smartphones For Work? Are You Being Smart About It?, Are You A Security Threat To Your Mobile Device? and And Yet Another Security Risk to Mobile Devices … Malware, we discussed a myriad of security issues related to employees' personal use of work-provided smartphones. Well, technological salvation from these issues may be at hand. The solution: a split-personality smartphone. According to CNN, To Protect Data, Phones Develop Split Personalities, many companies are currently presenting their own concepts of dual-purpose smartphones. These new security systems generally run on Google's Android platform, the one perceived as the most vulnerable to security breaches.
CNN reported that AT&T is in the process of introducing “Toggle”, a service that “separates an Android phone into personal and work environments, and the user can switch between the two.” That is, the user can set the smartphone to either work or personal mode, with each segregated from the other. Now that sounds like a great idea…
Perhaps a split-personality smartphone will solve some of the security issues raised by our posts above - such as the introduction of malware from untested or unstable apps, or the loss of data due to your child getting his/her sticky fingers on your phone. According to Beta Byte, AT&T’s Toggle Enables Bring-Your-Own-Device Management,
“the personal mode functions without restrictions; the work mode is secured and can only run approved applications. Data for both modes is kept separate. The whole system is managed company-wide from a central Web portal. There, employers and IT admins can manage allowed devices and permitted applications, including performing features like remote administrative wipe.”
AT&T believes the "bring-your-device-to-work" trend is here to stay, and Toggle represents the "natural" solution to a growing security administration problem. (AT&T said its Toggle service will be introduced by the end of the year.) Enterproid, the tech company on whose work Toggle is based, plans to introduce a version for the iPhone market. LG, Samsung and Research in Motion (Blackberry) are also in the process of developing technologies and platforms with split personalities.
Time will tell whether these new services will produce the desired result of separating an employee's personal smartphone use from work smartphone use. From my standpoint, I could see this becoming an issue for trade secret and/or misappropriation litigation. If companies do not implement a "split" of personal and business use of a smartphone, will someone argue the company has consented to the employee's personal use of company information? Also, could this option simply provide another way for departing employees to pilfer confidential information? That is, the employee could ensure that he/she has stored certain company information in the "personal" mode of the smartphone…when the company moves to remotely wipe the business side of the device, the employee could still carry away company secrets. As always, we will keep you posted on what we hear.
In the meantime, have you heard of other smartphone technology that separates personal and work-related phone functions? If so, drop us a line.
Senator Introduces Bill to Protect Online Personal Information – How Will Your Security Systems Rate?
Senator Richard Blumenthal, a Democrat from Connecticut, introduced the Personal Data Protection and Breach Accountability Act of 2011 on Thursday. The aim of the bill is to protect personal information from online security breaches, as well as punish companies that act carelessly with customers’ information.
"The goal of the proposed law is essentially to hold accountable the companies and entities that store personal information and personal data and to deter data breaches," Senator Blumenthal told the New York Times in Senator Introduces Online Security Bill. "While looking at past data breaches, I've been struck with how many are preventable."
The Act regulates how companies store online data. The rules would require companies that store data for more than 10,000 people to follow specific storage guidelines and ensure the correct storage of personal information. Companies that violate these guidelines could be subject to stiff fines.
Senator Blumenthal reported to the New York Times that, if the new bill passes, customers would be able to sue companies, like Sony, that do not take adequate precautions. (Remember, the Sony breach put 77 million customers' private information in jeopardy.) Senator Blumenthal called the Sony data breach "a poster child" for the law, although work on the law had begun prior to the breach. It will be interesting to see how this unfolds.
What type of online security do you have in place to protect customers’ and/or employees’ information? Does your industry need the federal government to dictate the types of guidelines necessary to protect personal information or will the market simply punish those entities that do not secure our personal information? What are your thoughts?
Teresa is the Chair of Fredrikson’s Non-Competes and Trade Secrets Group, and an MSBA Certified Labor and Employment Law Specialist. She counsels business clients on risk management and policy development relating to employee use of technology, and also litigates their business and employment disputes. Teresa trains, writes and lectures extensively on legal issues arising from business use of technology and social media.