Thursday, March 31, 2016

Social Media Surveillance

What is Social Media? 

Ever since the advent of the Internet, human communication has drastically changed. The networking potential created by the Internet has allowed people from all across the globe to communicate instantaneously in ways that seemed impossible only 25-30 years ago. With the rise of the Internet, social media sites have emerged: websites with the specific purpose of communicating with others, sharing ideas and information, and creating interactive communities around user-generated content. Facebook, the most popular social media site, had 1.59 billion monthly active users as of the fourth quarter of 2015.

Social media can take many different shapes and sizes, and can feature registered or anonymous users. Some social media outlets, such as the app Yik Yak, have come under fire recently for the content posted on them. Users posting anonymously have made racist, sexist, and otherwise offensive posts, as well as threats of shootings and terrorism. With the constant threat of terrorism and school shootings at the forefront of the minds of law enforcement and school administrators, threats of violence and other offensive posts are not taken lightly. A survey of college officials in April 2015 showed that a majority of respondents monitored such public social media feeds. The question becomes: should school officials and law enforcement monitor public social media posts, and should they actively seek out those who make offensive or threatening posts?

Pros
On the surface, the benefits of social media monitoring are obvious: should there be threatening, offensive, or otherwise questionable posts, school officials, law enforcement, and others in positions of authority will be able to see the posts and act on them. In today's world, potential attackers with strong social media presences may post about an attack, or hint at it. Even when a poster isn't serious about the threats they are making, it is impossible to tell their intent without further follow-up. Some of these threats are made on social media sites like Facebook and Twitter, where users are required to register and disclose information in order to make an account, which makes tracking down the poster easier. However, apps like Yik Yak, where users are able to post anonymously, have been hotbeds for offensive speech and threats of violence as well. In cases where threats do occur, the anonymous nature of Yik Yak has not protected the identities of posters. Police have arrested multiple people who have used the app to threaten violence. While the police are required to obtain a subpoena to get information about otherwise-anonymous posters, the timing of the arrests (hours after the posts themselves) shows that those running Yik Yak do not take these threats lightly. While the seriousness of the threats remains unknown, the proactivity of law enforcement has likely saved lives.

The benefits of monitoring social media are broader than just preventing violence and terrorism threats. Between September 2012 and September 2013, nine teenage suicides were linked to the Ask.fm social media site alone. Monitoring the social media and Internet activity of teenagers is one way to keep them safe. The Internet is a large, open space, so it may not be unreasonable to make sure that teenagers are not getting mixed up in trouble they shouldn't be. Additionally, the anonymity provided by the Internet can encourage people to say things that they wouldn't in real life. This can be even worse for teenagers, as it gives bullies an additional outlet to harass their victims. Being able to spot harassing posts, along with posts about depression, self-harm, and other red flags, and then intervene could help prevent further incidents.

Cons
Opponents of such monitoring and the subsequent follow-ups cite free speech as the main reason postings should be left alone. On Yik Yak, aside from removal of legitimate threats or other calls to violence, the app self-regulates through an upvoting/downvoting system: if a post reaches a score of negative five, it is removed. Much of the offensive or otherwise negative content gets filtered out by the community in this way, so many offensive or otherwise unpopular posts will not last long. Trolls and those with hateful opinions aside, most people do not approve of hate speech. Policies like this keep the community a more regulated space without additional involvement.
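
The threshold rule described above is simple enough to sketch in a few lines of Python. This is a toy illustration of the mechanism, not Yik Yak's actual code; all names here are invented:

```python
# Hypothetical sketch of a community moderation rule: a post whose score
# drops to -5 is automatically removed.

REMOVAL_THRESHOLD = -5

class Post:
    def __init__(self, text):
        self.text = text
        self.score = 0
        self.removed = False

    def vote(self, delta):
        """Apply an upvote (+1) or downvote (-1), removing the post
        once its score reaches the removal threshold."""
        if self.removed:
            return
        self.score += delta
        if self.score <= REMOVAL_THRESHOLD:
            self.removed = True

post = Post("an unpopular post")
for _ in range(5):
    post.vote(-1)
print(post.removed)  # the fifth downvote removes the post
```

The appeal of a rule like this is that no moderator ever has to look at the post: the community's votes do all the work.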

For example, to prevent the app from infiltrating the high school community, "geo-fences" have been placed around about 90 percent of high schools and middle schools, effectively preventing anyone from accessing the app from those locations. This helps keep the app out of the hands of those who aren't mature enough to handle it. Cyberbullying is much more prevalent in middle schools and high schools than in college, so the anonymous nature of this app becomes all the more dangerous in the hands of those most likely to abuse it. Steps like these keep the app working as intended while taking some precautions to prevent it from getting out of hand.
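
A geo-fence like the ones described above boils down to a distance check against a fenced location. Here is a minimal sketch in Python, assuming a simple circular fence; the coordinates and radius are invented for illustration and are not Yik Yak's actual parameters:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two latitude/longitude points."""
    r = 6371000  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def inside_geofence(user, fenced, radius_m=500):
    """Deny access when the user is within radius_m of a fenced location."""
    return haversine_m(user[0], user[1], fenced[0], fenced[1]) <= radius_m

school = (40.7128, -74.0060)                      # hypothetical school location
print(inside_geofence((40.7129, -74.0061), school))  # True: a few meters away
print(inside_geofence((40.7628, -74.0060), school))  # False: several km away
```

In practice the app would run a check like this against a database of school locations each time it starts up.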
Even in spaces like college campuses, where users can be deemed mature enough to access the app, there is still evidence of hateful speech. However, hateful speech is not illegal. Though it may not be encouraged or condoned by the app developers, its users, or third parties, people are still free to speak their minds. Were school administrators or law enforcement to seek out those making racist, misogynistic, or otherwise offensive posts, there would be little they could do to punish them. The app can be banned on a school's Wi-Fi network, but that is mostly a symbolic gesture, as the app would still be available through cellular data. And at state schools, freedom of speech is protected under the First Amendment.
This also sets a potentially dangerous precedent, and could start a slippery slope. If one part of speech is censored, what will come next?

My Opinion
I see both sides of the argument, and I think there are merits to both. I am definitely a proponent of identifying those who make threats of violence and terror threats. As mentioned in the post, there is no way to know if the threats are serious or not, and I don't think that we can afford to err on the side of leniency with regard to these posts. I think that public social media can be monitored, and not intervened on unless the situation calls for it, especially when it comes to younger, less mature users of social media sites. However, I think that free speech in all other circumstances should be honored. I don't approve of hate speech, but I do not think it is right to censor it. I also think that in this age of Internet trolls it would be a waste of resources to go after anyone who says something offensive over the Internet. The Internet is home to so many controversial posts, opinions, and people, and I think it's important to understand that not everyone will say or do nice things, especially under the veil of anonymity. However, just because someone's feelings are hurt does not mean we need to seek out and reprimand the offender. What are your thoughts?

Question of the Week No. 11

Cyberbullying, student violence at school, and teenage suicide are growing concerns in grades K-12 in schools across the nation. Some schools are monitoring the social media posts of students in an effort to combat these problems and require students to disclose their social network passwords to school officials. Many students and parents oppose such monitoring, citing an invasion of student privacy. Is such monitoring sound public policy in today's digital world?

Friday, March 25, 2016

Week 10 Takeaways


1.      A jury awarded Hulk Hogan $140 million in his sex tape case: $55 million for economic harm, $60 million for emotional distress, and $25 million in punitive damages against Gawker and Nick Denton, the owner of Gawker. It's likely that the judge will reduce the award, both because Gawker will argue that the amount the jury awarded is far in excess of how much the tape actually hurt him and because there was some evidence that was not presented in court. I didn't know that judges could reduce the amount of money a jury awards.

2.      An exabyte is equal to 1 billion gigabytes. Up until 2003, we had produced 5 exabytes of data in total. Now we produce 5 exabytes every ten minutes. That number will keep increasing as more and more things become Internet-based, like our cars, houses, and thermostats. Larger units such as the zettabyte (1,000 exabytes) already exist, and we may eventually need measurements larger still.
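
Working through the arithmetic in that takeaway (assuming decimal units, where 1 exabyte = 10^9 gigabytes):

```python
# Unit conversion for the "5 exabytes every ten minutes" figure.
GB_PER_EB = 10**9        # 1 exabyte = 1 billion gigabytes
eb_per_10_min = 5

eb_per_day = eb_per_10_min * 6 * 24   # 6 ten-minute intervals/hour, 24 hours/day
print(eb_per_day)                     # 720 exabytes per day
print(eb_per_day * GB_PER_EB)         # 720,000,000,000 gigabytes per day
```

At that rate, a single day's output is 144 times everything produced before 2003.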

3.      A study done with anonymized Facebook posts was able to re-identify people with 95% accuracy, given three other data points about the person combined with their Facebook posts. This is important because we have been talking in class a lot about whether data can accurately be linked back to people and what kind of protection/anonymization that data needs. We also learned that there is a difference between information that personally identifies you as an individual and aggregate data that is not linked directly to a person.

4.      HB 300 passed, which requires law enforcement agencies to adopt written policies about their use of body cameras in their work. The legislation sets minimum standards for usage; beyond that, cities must come up with their own. According to the bill, the footage is not classified as public or private, and law enforcement must balance public and private interests on a case-by-case basis.

5.      HB 358 passed as well. It recognizes that the existing laws regarding student privacy are insufficiently protective. It requires the state board of education to develop a data governance plan focused mainly on security. Each educational institution must have a data management plan, and the vendors they use must have adequate privacy safeguards as well. It also creates a state student data officer and recognizes that individually identifiable data is owned by the individual student.

6.      We discussed the challenges involved in creating regulations for data brokers: defining what exactly data brokers are, transparency between brokers and consumers, access to the data, sensitive information, inferences made from the data, incorrect inferences, data security standards, consent, consumer education, and enforcement. The Data Broker Accountability and Transparency Act is a federal bill designed to get the ball rolling on regulating data brokers. Much of its solution centers on a website that outlines standards and punishments for data brokers, as well as a consumer education section for the public.


Thursday, March 24, 2016

Do Not Track

Do Not Track

In many of our modern browsers, we see the option of “Ask websites not to track me” (Safari v9) or “Send a ‘Do Not Track’ request with your browsing traffic” (Chrome v48) but what does this option do? 

Do Not Track (DNT) is a small piece of information sent along with your HTTP requests when you visit a webpage. It is a single signal maintained by the browser itself, so you do not need a separate opt-out cookie from each individual advertiser. Do Not Track tells the website and its third-party content providers (such as advertisers) that you do not wish to be tracked for advertising purposes.
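
What the browser does can be sketched with Python's standard library: it simply attaches a `DNT: 1` header to each outgoing request. The snippet below builds the request object without sending anything over the network:

```python
# Sketch of a browser attaching the Do Not Track header to a request.
from urllib.request import Request

req = Request("https://example.com/")
req.add_header("DNT", "1")  # "1" means the user opts out of tracking

# urllib normalizes header names (capitalizing only the first letter),
# so the header is stored under the key "Dnt":
print(req.get_header("Dnt"))  # 1
```

Whether the server and its advertisers respect that header is entirely up to them, which is the crux of the effectiveness problem discussed below.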

What is Web Tracking

Web tracking is when a website uses special software and cookies to keep tabs on its visitors. These tracking services can be used to improve the online experience by tailoring ads to the consumer. On the website of Opentracker, a company that provides tracking and other analytical tools, a little widget shows the potential information that can be tracked, such as your location, the website you came from, your number of visits, and total pages viewed.

Websites like Amazon can then take this information and use it to predict and suggest products to you. First-party tracking is tracking done by the website you are on. Many websites like Facebook, Amazon, and Google will store cookies, small text files assigned to your browser once you've visited the website. These cookies are helpful for ensuring that you stay logged in to your online account or that your settings are restored. Tracking by a third party, like an ad server, instead uses cookies to recognize the same user across different websites. When you visit the New York Times, you might get ads for shoes if you had searched for shoes earlier.
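
The cookie mechanics described above can be illustrated with Python's standard library: a server sends a `Set-Cookie` header, and the browser stores the values and replays them on later requests. The cookie names and values here are invented for illustration:

```python
# Sketch of the browser side of the cookie exchange.
from http.cookies import SimpleCookie

jar = SimpleCookie()
# Parse what a server might send to remember a logged-in session:
jar.load("session_id=abc123; user=jane")

print(jar["session_id"].value)  # abc123

# On later requests to the same site, the browser sends the values back
# in a single Cookie header:
cookie_header = "; ".join(f"{k}={m.value}" for k, m in jar.items())
print(cookie_header)  # session_id=abc123; user=jane
```

A third-party tracker works the same way; the difference is that its cookie is set by an ad server whose content is embedded on many sites, so the same identifier comes back from all of them.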


Implementation of Do Not Track

In 2007, several public interest groups, including the World Privacy Forum, CDT, and the EFF, asked the FTC to create a Do Not Track list for online advertising. In their proposal, the groups asked the FTC to "Create a national Do Not Track List similar to the national Do Not Call List." Nothing came of the request until 2010, when FTC Chairman Leibowitz told a Senate committee that the FTC was considering a DNT list. Later that year, the idea of using an HTTP header instead of cookies or a list gained widespread acceptance. In 2011, Mozilla Firefox was the first browser to implement the DNT header, and Microsoft Internet Explorer, Google Chrome, and Apple Safari shortly followed. In 2012, support for DNT came from the FTC, the White House, and the Digital Advertising Alliance. The W3C, an international Internet standards-setting group comprising all interested stakeholders, gathered to formulate an agreed-upon international standard for a header-based DNT signal. However, there were arguments among the members, and a consensus could not be reached. After nearly four years, the group issued a modest proposal in 2015 that calls for networks and companies to honor a Do Not Track request in limited circumstances.

The implementation of Do Not Track has been riddled with issues. In 2012, the "Express" settings used while installing Windows 8 enabled the Do Not Track option by default in Internet Explorer 10. Advertisers bashed Microsoft for setting it as the default and quickly announced that they would ignore the DNT request, arguing that a default makes the consumer's choice for them. The W3C also criticized Microsoft's decision. In 2015, Microsoft announced that as of Windows 10, the express settings would no longer default to Do Not Track. However, the damage had been done, and many privacy advocates say that the backlash from Windows 8's default opt-out approach killed DNT.


Effectiveness of Do Not Track

Most major browsers include a Do Not Track option; however, website owners and advertisers can ignore the request, minimizing its effectiveness. A majority of websites on the Internet do not honor the DNT signal. However, some major sites like Twitter and Pinterest have committed to honoring it (click here for a list of sites that honor DNT).

In 2011, the Digital Advertising Alliance developed a Do Not Track system of its own, which allows users to affirmatively opt out of targeted advertising by logging in to AdChoices and clicking on an icon. The icon links to a video about the value of interest-based advertising and then displays another link that users can click to opt out of receiving interest-based ads from some or all DAA members. However, a study conducted by Parks Associates found that three years after the introduction of the AdChoices icon, most consumers were unaware of it: awareness had grown only to 6% in 2013 from 5% in 2011.


In June of 2015, Consumer Watchdog petitioned the FCC to require edge providers (like Google, Facebook, YouTube, Pandora, Netflix, and LinkedIn) to honor Do Not Track requests from consumers. However, the FCC ruled that it will not force edge providers to honor consumer Do Not Track requests, saying that it doesn't intend to "regulate the Internet, per se, or any Internet applications or content."

What Can You Do as a Consumer

There are many options for consumers to protect their privacy from trackers. Many browsers offer the possibility of installing extensions to enhance the browser's function. Some extensions can be used to block any traffic from trackers.

Ad blockers such as uBlock Origin, AdBlock, and Adblock Plus allow users to block ads and filter out trackers. Extensions like Ghostery and Disconnect automatically block third-party scripts used for tracking you, like Google Analytics, Intercom, social sharing buttons, and more. The EFF's Privacy Badger is "born out of [EFF's] desire to be able to recommend a single extension that would automatically analyze and block any tracker or ad that violated the principle of user consent."
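
At their core, these extensions compare each requested URL against a list of known tracker domains. Here is a toy sketch of the idea; the blocklist entries are made up, and real extensions use much larger, regularly updated lists with far more elaborate filter rules:

```python
# Toy illustration of domain-based tracker blocking.
from urllib.parse import urlparse

BLOCKLIST = {"tracker.example", "analytics.example"}  # illustrative entries

def is_blocked(url):
    """Block a request if its host is (or is a subdomain of) a listed domain."""
    host = urlparse(url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in BLOCKLIST)

print(is_blocked("https://cdn.tracker.example/pixel.gif"))  # True
print(is_blocked("https://news.example/article"))           # False
```

Unlike DNT, this approach doesn't ask the tracker for cooperation; the request simply never leaves the browser.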


Author's Thoughts

I think that we can't rely on organizations to protect our privacy; we have to take the first step. In my browser I have uBlock and Ghostery set up to block any intrusive ads and trackers. I believe that while DNT is a great idea on paper, its implementation has been lackluster. With advertisers not honoring requests, and no incentives or real reason for them to do so, DNT is slowly dying.

Question of the Week No. 10

Should data brokers be legally required to disclose to consumers what information they have compiled on an individual and to whom the information has been sold?

Friday, March 18, 2016

Data Broker Legislation

Data brokers, who are they and how do they impact your privacy?
Data brokers are companies that collect information about people from a wide range of sources in order to build personal profiles. These profiles are then sold to companies that want personal information in order to market products.
Data brokers sell compiled personal data to other companies (including other data brokers), organizations, government agencies, or other persons. In some cases, they exchange this information under cooperative arrangements rather than sell it. In other instances, they provide the information at no cost, making money through advertising or referrals.
The Federal Trade Commission (FTC) has defined data brokers as “companies that collect information, including personal information about consumers, from a wide variety of sources for the purpose of reselling such information to their customers for various purposes, including verifying an individual’s identity, differentiating records, marketing products, and preventing financial fraud.”  Protecting Consumer Privacy in an Era of Rapid Change (March 2012) at page 68.


Where do data brokers get your personal information?
  1. Data brokers can get information from a wide range of public records, such as court filings; real property and tax assessor records; mortgages; driver's license records; motor vehicle records; voter registrations; telephone directories; real estate listings; birth, marriage, divorce, and death records; professional license filings; recreational (hunting and fishing) licenses; and census demographic information.
  2. Self-reported information, such as contest entries, sweepstakes, and warranty cards.
  3. Social media sites such as Facebook. Data brokers can use these sites to gain access to a user's name, gender, location, and level of education.
  4. Cooperative arrangements in which companies exchange existing information about their customers for additional information gathered by data brokers.
  5. Buying information from other data brokers, retailers, or financial institutions. This may include consumers' web browsing activities from online advertising networks; data about purchases from retailers, catalog companies, and magazines; and data from websites where consumers register or log in to obtain services, such as retail, news, and travel sites.


Privacy concerns
The general population is largely unaware that their data is being collected and stored.  Data broker companies prefer it this way. Data brokers are largely unregulated. If the population had a better understanding of what was being done with their data, it is likely there would be more concern.  
A good example of a data broker company is eBureau, which you may remember from a video by Professor Dryer. It is one of the top data-collecting companies, yet before this class I had never heard of it. eBureau knows everything about us. It creates what is called an eScore, which is like a credit score except that it contains a lot more information. This information is then sold to companies that use it to decide if someone will be a profitable customer.
Companies can use this information to discriminate against users, and they don't rely only on cold, hard facts. Data brokers are notorious for inferring details from the information gathered. Here is an example: if a user belongs to a data segment called "Biker Enthusiasts" that offers motorcycle-related coupons to its customers, an insurance company using that same segment might infer that the consumer engages in risky behavior. Thus, information compiled by data brokers can seriously affect someone's life, yet only one of the major data broker companies allows users to correct inaccurate data that has been compiled about them.
There are benefits to data brokers. They help create targeted ads, which are efficient for both the company and the consumer. Data brokers also help prevent fraud: four of the major data broker companies sell risk mitigation products. These products help companies ensure that the Jane Doe of 123 Main Street who wants to buy a boat is actually Jane Doe.


What is being done to protect user privacy?
The FTC first became concerned with the inner workings of data brokers in 1990. After conducting a thorough investigation, the FTC suggested to Congress that something should be done to increase transparency in the data broker business. Despite this, no legislation was enacted.
In 2012 the FTC tried to renew interest in data brokers by issuing a report. The report called for more transparency in the data broker business and suggested that data brokers create a centralized website that anyone could access. The website would identify data broker companies, detailing their data gathering methods as well as how the data are used. It would also give users the opportunity to correct incorrect data about them or to opt out of having their information used entirely.
The FTC also issued administrative subpoenas to nine data broker companies: Acxiom, Corelogic, Datalogix, eBureau, ID Analytics, Intelius, PeekYou, Rapleaf, and Recorded Future. The subpoenas required these companies to respond to a detailed set of information requests regarding the data brokers' practices, including the nature and sources of the consumer data they collect; how they use, maintain, and disseminate the data; and the extent to which they allow consumers to access and correct data about them or to opt out of having their personal information sold or shared. Their responses have not yet been made public.
Congress has had a much more proactive response to the FTC's latest findings than it did in 1990. In 2014 a bill was introduced in an effort to protect consumer privacy. This bill did not make it through the Senate, but a similar bill was introduced in 2015 by Edward Markey.
The “Data Broker Accountability and Transparency Act of 2015.”
“Prohibits data brokers from obtaining or causing to be disclosed personal information or any other information relating to any person by making a false, fictitious, or fraudulent statement or representation, including by providing any document that the broker knows or should know to: (1) be forged, counterfeit, lost, stolen, or fraudulently obtained; or (2) contain a false, fictitious, or fraudulent statement or representation.”


Senator Markey states that the data broker industry is a "shadow industry of surreptitious data collection that has amassed covert dossiers on hundreds of millions of Americans. Data brokers seem to believe that there is no such thing as privacy." In addition, co-sponsoring Senator Richard Blumenthal (D-Conn.) calls brokers "insidious, invisible threats" to privacy.
The Direct Marketing Association, a trade group that represents data brokers, believes that brokers are taking steps on their own to improve transparency and that the industry should be self-regulating.
While the Data Broker Accountability and Transparency Act requires the FTC to set up a website where consumers can make some decisions regarding their personal data, it also requires the FTC to promulgate specific rules about how this is done. This is a standard method Congress uses to get things done: it lays out concepts and a plan of action and then requires a department of the executive branch to specify regulations through a public process.
There are two failings of this approach. The first is that the public process involved in rulemaking favors industry. For consumers to contribute to the making of rules and statutes through a public process, they must be involved and knowledgeable, which is difficult for individual consumers. Industry, on the other hand, has resources and can hire analysts and lobbyists to engage in these processes, follow proceedings, and influence the making of statutes.
The second failing is that industry self-regulation is a historical myth. The history of industry in the United States is one of aggressive marketing and innovation, which has led to centuries of economic growth. Occasionally, however, government regulation is necessary. Information is a commodity and can be privately owned and traded on the market. But personal information is naturally owned by individuals and should be traded at the behest of the owner, under conditions the owner specifies.
The Data Broker Accountability and Transparency Act, therefore, is flawed in that it doesn't specifically recognize that individuals own their own data. The act should require that data be kept by the individual unless contracted for otherwise.

Thursday, March 17, 2016

Question of the Week No. 9

Should an individual have an unqualified legal right to control the collection, use, access and retention of personal information about them and their activities?

Saturday, March 12, 2016

Week 9 Take Aways


  • Apple/FBI Litigation Update
    • Interested parties file “amicus” briefs supporting both Apple and the Government.
      • Help the court understand the broad implications of how they might rule. They either support the plaintiff or the defendant in the case
      • 17 Silicon Valley tech companies have filed an amicus brief in support of Apple, arguing that the All Writs Act was used improperly and that compelling Apple to write code would violate the 1st Amendment.
      • Law enforcement coalitions also filed Amicus brief in support of the FBI.
      • 6 families have also filed briefs.
    • AT&T and Verizon called on Congress to address issue of encryption
    • Wall Street Journal editorially supports Apple.
  • Current Developments
    • Silent Circle: Blackphone 2 released, an encrypted phone; pre-orders skyrocketed
    • CryptTalk: Encryption in voice. You can just download an app that encrypts your phone conversations
    • Smallest drone with a camera: $69
      • Skeye Nano
    • British company shoots net to catch drones. Accurate to 100 ft
  • Results for Question of the Week: Should the FTC require all advertisements for “smart devices” to list possible privacy and security risks?
    • YES 5
    • NO 5
  • How can the FTC best minimize privacy risks with smart devices
    • Mandatory privacy warning in ads? 0 people
    • Mandatory privacy warning on devices or packaging? 0 people
    • Comprehensive consumer education? 3 people
    • Requiring adequate privacy safeguards to be built into devices? 7 people
    • When asked to choose 2 of these options. Most of the people went with the third and the fourth option.

  • Regulating Internet of Things
    • The two groups voted that the FTC should regulate most of these except for number 3 (FTC/FTC mandates best practices), 4 (Market place), 7(Market place), 9 (Industry best practices/FTC), 10 (Industry best practices), and 11 (Market place).


Friday, March 4, 2016

Week 8 Take-Aways

1. The USA Freedom Act of 2015 made three changes to the FISA court:
a.       The appointment of 5 amicus curiae to advise the FISA court when it makes decisions on significant or novel interpretations of the law, to protect privacy and the public interest. They are called in only at the request of the court and are not an oversight board.
b.      Novel or significant interpretations of the law must be made public to the extent that is practical.
c.       The Director of National Intelligence must make all past court decisions public to the extent that is practical.
2. Two motions have recently been filed in the Apple v. FBI case:
a.       A motion to vacate the magistrate’s order, filed by Apple
b.      A motion to compel Apple to create the software to unlock the iPhone, filed by the FBI
3.  ESPN and journalist Adam Schefter are being sued by Jason Pierre-Paul for invasion of privacy. Schefter published a photo of Pierre-Paul’s medical record describing his amputation. The privacy claim is based on the privacy tort of “public disclosure of private facts”.
4. The current “status quo” method of appointing judges to the FISA court was unanimously disapproved of. The class was split almost evenly (3-4-4) among the other three options. Each option presents advantages and disadvantages in terms of accountability, representation, and political influence.
5. There are multiple concerns with the set-up of the FISA court, as well as multiple reforms proposed to address some of those concerns. Common concerns raised were:
a.       Rubber Stamping – the court has a very high (>90%) approval rate. This could be explained by the court simply being a blanket rubber stamp for the government or it could be because only proposals with a good chance of approval are even brought before FISC.
b.      Lack of transparency – the court’s proceedings are currently carried out ex parte. There is a call for greater public access to the court, but this must be weighed against the need for secrecy for the sake of national security.

c.       Accountability – there is no oversight of the court, either from the public or within the court. The amicus curiae were a start at creating an advising board which takes the public’s interests into account, but they must be invited to the court, which is only required for novel or significant interpretation of a law.

Thursday, March 3, 2016

Privacy and the Internet of Things

What is the “Internet of Things”?
As technology progresses, and connected devices get cheaper and smaller, a new kind of Internet-enabled device has emerged. Internet connectivity was formerly the realm of computers and cell phones; now everything from your washing machine to your television to even your car is connected. Collectively, these devices are referred to as the “Internet of Things”. Connecting these devices to the Internet allows users to collect data from them and issue commands to them remotely. As I will explain, the Internet of Things can be a powerful force for good, with the potential to save money, time, or even extend our lives. However, like most innovations, with it comes potential for misuse, and we ultimately must ask ourselves how these devices should be regulated.

The Benefits of the Internet of Things:
The benefits of the Internet of Things are incredibly far reaching and diverse. Nest, a recent acquisition by Google, provides an IoT-enabled thermostat that is designed to adapt to your schedule. After a few days of use, it will automatically start changing the temperature in your house according to your routine. It’s also Internet-enabled, allowing you to control your thermostat remotely. Other systems provide a centralized control point for all of your IoT devices, allowing the savvy user to create custom relationships among the huge amounts of data those devices collect. You could program it to unlock your door when your car enters the driveway, turn on your lights at a specified time, or even send an e-mail, with attached photos from your cameras, if it detects unauthorized movement. And since there are IoT versions of a huge number of household objects, from lightbulbs, to outlets, to coffee makers, the combinations are astronomical.
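
The "custom relationships" idea can be sketched as a tiny event-rule engine. This is purely illustrative, with invented device events, not any particular product's API:

```python
# Toy home-automation rule engine: register rules, then dispatch device events.

rules = []  # (event_name, action) pairs

def when(event_name, action):
    """Register an action to run whenever the named event fires."""
    rules.append((event_name, action))

def fire(event_name, **data):
    """Dispatch an event to every rule registered for it; return the results."""
    return [action(**data) for name, action in rules if name == event_name]

# "Unlock the door when the car enters the driveway":
when("car_entered_driveway", lambda **d: f"unlocking door for {d['car']}")
# "E-mail a camera photo on unauthorized movement":
when("motion_detected", lambda **d: f"emailing photo from {d['camera']}")

print(fire("car_entered_driveway", car="sedan"))
# ['unlocking door for sedan']
```

Every rule fired is also a record of behavior, which is exactly the data-collection concern raised in the next section.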
The IoT realm doesn’t exist exclusively in the home, however. Internet-enabled cars have become incredibly common, growing into a 47-billion-dollar market in 2015. Today’s cars can provide data on road conditions and vehicle diagnostics, and can even receive updates to the car’s software “over the air” through cell phone networks. Tesla has in the past improved the performance of its cars through these over-the-air updates, increasing the acceleration of its P85D models. More recently, through these updates, Tesla implemented a feature called “Summon”, which allows you to call your car to your location from a parking lot.
On top of all the convenience, the IoT has the potential to improve our health. St. Jude Medical released a pacemaker known as the Accent, which broadcasts metrics from a patient’s body over the internet to their doctor. Fitbit and Jawbone wristbands track our heart rate to help us exercise better. The Internet of Things has huge potential to enrich our lives through data. Unfortunately, it also has huge potential for invading our privacy.

The Drawbacks of the Internet of Things
In its Staff Report on the Internet of Things, the Federal Trade Commission outlined several potential risks that will threaten consumers once IoT technology becomes sufficiently prevalent. For example, we typically give consent (either implicitly or explicitly) to data collection through some terms of service. However, IoT devices are often small and don’t have screens, which complicates obtaining that consent; in some cases, we might not even be aware that data is being collected at all. The FTC was also particularly worried about the potential that “companies might use this data to make credit, insurance, and employment decisions”. Advocates of using IoT data in insurance claim that it would provide more accurate coverage, better matching a person’s insurance premiums to their risk level. I would argue that setting a precedent of turning over this kind of data is dangerous. If the Third Party Doctrine stays as it is, then having insurance companies require citizens to turn over their IoT data gives law enforcement functionally warrantless access to a person’s driving and other habits. This concern applies not only to the data you might turn over to an insurance company, but also to the companies that collect the data in the first place. Law enforcement would no longer have to get a warrant to track your vehicle; they could just subpoena that information from your car maker (presuming the car maker tracked and kept it). They would no longer need to post a security detail to determine when you come and go from your home; they could simply request the information from Nest.
Even ignoring the potential for government overreach, the prospect of companies collecting and storing data from your IoT devices is uncomfortable. Even if this data were released in an “anonymous” format, studies have shown that it doesn’t take very much work to link an anonymous dataset back to an individual. For example, in an MIT study, researchers were able to link anonymous cell phone metadata to specific users with 95% accuracy using only four known location-time data points. Researchers at the University of Texas at Austin were able to partially de-anonymize a dataset of Netflix ratings by cross-referencing the ratings with IMDb. If this is possible for location history and movie ratings, imagine what could be derived from the massive amounts of IoT data that could potentially be collected.
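To illustrate how little it takes, here is a toy sketch in Python of the linkage idea behind these studies: given a few known (place, hour) observations about a target, an attacker simply filters the “anonymous” traces for the pseudonyms that contain them all. The dataset and observations are invented for illustration and are far smaller than anything real.

```python
# Toy linkage attack: re-identifying a user in an "anonymized" location
# dataset from a handful of known (place, hour) points. All data invented.

anonymized = {
    "user_001": {("cafe", 8), ("office", 9), ("gym", 18), ("home", 22)},
    "user_002": {("cafe", 8), ("school", 9), ("park", 17), ("home", 21)},
    "user_003": {("office", 9), ("cafe", 12), ("gym", 18), ("home", 23)},
}

# Four observations an attacker knows about the target,
# e.g. gleaned from public social media posts.
known_points = {("cafe", 8), ("office", 9), ("gym", 18), ("home", 22)}

def link(known, dataset):
    """Return the pseudonyms whose traces contain every known point."""
    return [uid for uid, trace in dataset.items() if known <= trace]

print(link(known_points, anonymized))  # -> ['user_001']
```

With only three users, one match is unsurprising; the MIT result is striking because the same handful of points was enough to single out individuals among roughly a million people.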
While the potential uses of collected data are uncomfortable, the potential for IoT devices to be insecure is dangerous. Using SHODAN, a search engine for unsecured devices connected to the internet, a savvy user can access any number of things, from security cameras, to HVAC and power controls, to even alarm system controls. There have been documented examples of hackers gaining access to baby monitors and security systems for malicious purposes, and some manufacturers have failed to respond when asked to patch the issues. These risks aren’t restricted to the home, either. In July, two hackers revealed that they were able to remotely control a 2014 Jeep Grand Cherokee, gaining access to everything from the windshield wipers and radio to the transmission and brakes. All this access was remote and could be exercised against a vulnerable vehicle anywhere in the country. Another hacker revealed a device which, if hidden near a car accessed through GM’s OnStar app, could allow him to take control of the vehicle. Finally, a team of hackers was able to gain access to a TrackingPoint self-aiming rifle. Once inside, the team was able to modify the targeting computer to keep the rifle from firing, or even to change the rifle’s target.

Potential Legislation, and My Opinion
In its 2015 report, the FTC argued that legislation specifically targeting the IoT would be “premature”, and instead encouraged the implementation of “self-regulatory programs”, relying on companies to self-police in order to prevent security breaches. However, the FTC also called for “strong, flexible, and technology-neutral federal legislation to strengthen (the government’s) existing data security enforcement tools”. On the privacy front, the FTC recommended that, instead of targeting the IoT specifically, there should be “baseline privacy standards” (likely based around the Fair Information Practice Principles outlined in our reading) that apply to all technology. I tend to agree with the FTC. I believe that the problems raised by the IoT are simply extensions of existing problems within the technology industry. There is already a massive amount of data collected about us by online services such as Facebook and Google, which I would argue poses the same risks as IoT data. The Target and Home Depot data breaches show us that poor data security is not a problem exclusive to the IoT. Rather than waste time trying to fix the problems of a subset of the technology industry, I think that time would be better spent fixing the problems of the industry as a whole.
What do you think? Should there be laws specific to the IoT, or should they be left up to more general laws, or even just left to self regulation?