Quec.li's republished posts
http://quec.li/~m

matt [wronka.org]
Mon, 08 Feb 2016 05:01:00 -0500
I miss <a href="http://quec.es/t/kudos/">Kudos</a>: <a href="http://matt.wronka.org/pictures/gallery/temporal/2003-2007/2006/02/11/img_0853.jpeg">http://matt.wronka.org/pictures/gallery/temporal/2003-2007/2006/02/11/img_0853.jpeg</a>

matt [wronka.org]
Fri, 05 Feb 2016 16:03:00 -0500
Blinding sheets of white<br />
Powder falling from the sky.<br />
I am clearing snow.

Security Trade-offs in the Longbow vs. Crossbow Decision
https://www.schneier.com/blog/archives/2016/01/security_trade-_3.html
Fri, 22 Jan 2016 07:44:00 -0500
<p>Interesting research: Douglas W. Allen and Peter T. Leeson, "<a href="http://www.peterleeson.com/Longbow.pdf">Institutionally Constrained Technology Adoption: Resolving the Longbow Puzzle</a>," <i>Journal of Law and Economics</i>, v. 58, Aug 2015.</p> <blockquote><p><b>Abstract</b>: For over a century the longbow reigned as undisputed king of medieval European missile weapons. Yet only England used the longbow as a mainstay in its military arsenal; France and Scotland clung to the technologically inferior crossbow. This longbow puzzle has perplexed historians for decades. We resolve it by developing a theory of institutionally constrained technology adoption. Unlike the crossbow, the longbow was cheap and easy to make and required rulers who adopted the weapon to train large numbers of citizens in its use.
These features enabled usurping nobles whose rulers adopted the longbow to potentially organize effective rebellions against them. Rulers choosing between missile technologies thus confronted a trade-off with respect to internal and external security. England alone in late medieval Europe was sufficiently politically stable to allow its rulers the first-best technology option. In France and Scotland political instability prevailed, constraining rulers in these nations to the crossbow.</p></blockquote> <p>It's nice to see my security interests intersect with my D&D interests.</p>

Good rant on the current state of web development tools
http://blogs.harvard.edu/philg/2016/01/21/good-rant-on-the-current-state-of-web-development-tools/
Thu, 21 Jan 2016 13:50:00 -0500

Fighting DRM in the W3C
https://www.schneier.com/blog/archives/2016/01/fighting_drm_in.html
Thu, 14 Jan 2016 15:13:00 -0500
<p>Cory Doctorow has a <a href="https://www.eff.org/deeplinks/2016/01/you-cant-destroy-village-save-it-w3c-vs-drm-round-two">good post</a> on the EFF website about how they're trying to fight digital rights management software in the World Wide Web Consortium.</p> <blockquote><p>So we came back with <a href="https://www.eff.org/pages/objection-rechartering-w3c-eme-group">a new proposal</a>: the W3C could have its cake and eat it too. It could adopt a rule that requires members who help make DRM standards to promise not to sue people who report bugs in tools that conform to those standards, nor could they sue people just for making a standards-based tool that connected to theirs.
They could make DRM, but only if they made sure that they took steps to stop that DRM from being used to attack the open Web.</p></blockquote> <p>The W3C added DRM to the web's standards in 2013. This doesn't reverse that terrible decision, but it's a step in the right direction.</p>

The Internet of Things that Talks About You Behind Your Back
https://www.schneier.com/blog/archives/2016/01/the_internet_of.html
Wed, 13 Jan 2016 06:35:00 -0500

Comcast repeatedly crams modem upgrade demands into browsers
http://go.theregister.com/feed/www.theregister.co.uk/2016/01/12/comcasts_pestering_update_popups/
Tue, 12 Jan 2016 14:40:00 -0500
<h4>No, I'm quite happy with my own gear, says punter</h4> <p>Comcast subscribers are complaining that the broadband biz has been bombarding them with requests in their web browsers to get new cable modems.</p>

Magnus
http://xkcd.com/1628/
Mon, 11 Jan 2016 00:00:00 -0500
<img src="http://imgs.xkcd.com/comics/magnus.png" title="In the latest round, 9-year-old Muhammad Ali beat 10-year-old JFK at air hockey, while Secretariat lost the hot-dog-eating crown to 12-year-old Ken Jennings. Meanwhile, in a huge upset, 11-year-old Martha Stewart knocked out the adult Ronda Rousey." alt="In the latest round, 9-year-old Muhammad Ali beat 10-year-old JFK at air hockey, while Secretariat lost the hot-dog-eating crown to 12-year-old Ken Jennings.
Meanwhile, in a huge upset, 11-year-old Martha Stewart knocked out the adult Ronda Rousey." />

Replacing Judgment with Algorithms
https://www.schneier.com/blog/archives/2016/01/replacing_judgm.html
Fri, 08 Jan 2016 06:21:00 -0500
<p>China is considering a new "<a href="http://www.bbc.com/news/world-asia-china-34592186">social credit</a>" system, designed to rate everyone's trustworthiness. Many fear that it will become a tool of social control -- but in reality it has a lot in common with the algorithms and systems that score and classify us all every day.</p> <p>Human judgment is being replaced by automatic algorithms, and that brings with it both enormous benefits and risks. The technology is enabling a new form of social control, sometimes deliberately and sometimes as a side effect. And as the Internet of Things ushers in an era of more sensors and more data -- and more algorithms -- we need to ensure that we reap the benefits while avoiding the harms.</p> <p>Right now, the Chinese government is watching how companies use "social credit" scores in state-approved pilot projects. The most prominent one is Sesame Credit, and it's much more than a financial scoring system.</p> <p>Citizens are judged not only by conventional financial criteria, but by their actions and associations. Rumors <a href="http://qz.com/519737/all-chinese-citizens-now-have-a-score-based-on-how-well-we-live-and-mine-sucks">abound</a> about how this system works. Various news sites are <a href="http://www.independent.co.uk/news/world/asia/china-has-made-obedience-to-the-state-a-game-a6783841.html">speculating</a> that your score will go up if you share a link from a state-sponsored news agency and go down if you post pictures of Tiananmen Square.
Similarly, your score will go up if you purchase local agricultural products and down if you purchase Japanese anime. Right now the worst fears seem <a href="https://www.techinasia.com/china-citizen-scores-credit-system-orwellian">overblown</a>, but could certainly <a href="http://www.antipope.org/charlie/blog-static/2015/10/it-could-be-worse.html">come to pass</a> in the future.</p> <p>This story has spread because it's just the sort of behavior you'd expect from the authoritarian government in China. But there's little about the scoring systems used by Sesame Credit that's unique to China. All of us are being categorized and judged by similar algorithms, both by companies and by governments. While the aim of these systems might not be social control, it's often the byproduct. And if we're not careful, the creepy results we imagine for the Chinese will be our lot as well.</p> <p>Sesame Credit is largely based on a US system called FICO. That's the system that determines your credit score. You actually have a few dozen different ones, and they determine whether you can get a mortgage, car loan or credit card, and what sorts of interest rates you're offered. The exact algorithm is secret, but we know in general what goes into a FICO score: how much debt you have, how good you've been at repaying your debt, how long your credit history is and so on.</p> <p>There's nothing about your social network, but that might change. In August, <a href="http://money.cnn.com/2015/08/04/technology/facebook-loan-patent">Facebook was awarded a patent</a> on using a borrower's social network to help determine if he or she is a good credit risk. Basically, your creditworthiness becomes dependent on the creditworthiness of your friends. Associate with deadbeats, and you're more likely to be judged as one.</p> <p>Your associations can be used to judge you in other ways as well. 
It's now common for employers to use <a href="http://www.cleveland.com/business/index.ssf/2015/05/more_than_half_of_employers_no_1.html">social media sites</a> to screen job applicants. This manual process is increasingly being outsourced and automated; companies like <a href="http://www.forbes.com/sites/kashmirhill/2011/06/15/start-up-that-monitors-employees-internet-and-social-media-footprints-gets-gov-approval">Social Intelligence</a>, Evolv and First Advantage automatically process your social networking activity and provide hiring recommendations for employers. The <a href="http://www.datasociety.net/pubs/fow/EmploymentDiscrimination.pdf">dangers</a> of this type of system -- from discriminatory biases resulting from the data to an obsession with scores over more social measures -- are many.</p> <p>The company Klout tried to make a business of measuring your online influence, hoping its proprietary system would become an industry standard used for things like hiring and giving out free product samples.</p> <p>The US government is judging you as well. Your social media postings could get you on the <a href="https://theintercept.com/2014/07/23/blacklisted">terrorist watch list</a>, affecting your ability to fly on an airplane and even get a job. In 2012, a British tourist's tweet caused the US to <a href="http://newsfeed.time.com/2012/01/31/british-tourists-tweets-get-them-denied-entry-to-the-u-s">deny him entry</a> into the country. We know that the National Security Agency uses complex computer algorithms to sift through the Internet data it collects on both Americans and foreigners.</p> <p>All of these systems, from Sesame Credit to the NSA's secret algorithms, are made possible by computers and data.
A couple of generations ago, you would apply for a home mortgage at a bank that knew you, and a bank manager would make a determination of your creditworthiness. Yes, the system was prone to all sorts of abuses, ranging from discrimination to an old-boy network of friends helping friends. But the system also couldn't scale. It made no sense for a bank across the state to give you a loan, because they didn't know you. Loans stayed local.</p> <p>FICO scores changed that. Now, a computer crunches your credit history and produces a number. And you can take that number to any mortgage lender in the country. They don't need to know you; your score is all they need to decide whether you're trustworthy.</p> <p>This score enabled the home mortgage, car loan, credit card and other lending industries to explode, but it brought with it other problems. People who don't conform to the financial norm -- having and using credit cards, for example -- can have trouble getting loans when they need them. The automatic nature of the system enforces conformity.</p> <p>The <a href="https://aeon.co/essays/judge-jury-and-executioner-the-unaccountable-algorithm">secrecy of the algorithms</a> further pushes people toward conformity. If you are worried that the US government will classify you as a potential terrorist, you're less likely to friend Muslims on Facebook. If you know that your Sesame Credit score is partly based on your not buying "subversive" products or being friends with dissidents, you're more likely to overcompensate by not buying anything but the most innocuous books or corresponding with the most boring people.</p> <p>Uber is an example of how this works. Passengers rate drivers and drivers rate passengers; both risk getting booted out of the system if their rankings get too low. 
This weeds out bad drivers and passengers, but it also results in marginal people being blocked from the system, and everyone else trying not to make any special requests, avoiding controversial conversation topics, and generally behaving like good corporate citizens.</p> <p>Many have <a href="http://www.theverge.com/2013/3/11/4091282/muslim-new-yorkers-describe-nypd-surveillance-effects">documented</a> a chilling effect among American Muslims, who avoid certain discussion topics lest they be taken the wrong way. Even if nothing would happen because of it, their free speech has been curtailed because of the secrecy surrounding government surveillance. How many of you are reluctant to Google "pressure cooker bomb"? How many are a bit worried that I used it in this essay?</p> <p>This is what social control looks like in the Internet age. The Cold-War-era methods of undercover agents, informants living in your neighborhood, and agents provocateurs are too labor-intensive and inefficient. These automatic algorithms make possible a wholly new way to enforce conformity. And by accepting algorithmic classification into our lives, we're paving the way for the same sort of thing China plans to put into place.</p> <p>It doesn't have to be this way. We can get the benefits of automatic algorithmic systems while avoiding the dangers. It's not even hard.</p> <p>The first step is to make these algorithms public. Companies and governments both balk at this, fearing that people will deliberately try to game them, but the alternative is much worse.</p> <p>The second step is for these systems to be subject to <a href="http://towcenter.org/research/algorithmic-accountability-on-the-investigation-of-black-boxes-2">oversight and accountability</a>. It's already illegal for these algorithms to have discriminatory outcomes, even if the discrimination isn't deliberately designed in. This concept needs to be expanded.
We as a society need to understand what we expect out of the algorithms that automatically judge us and ensure that those expectations are met.</p> <p>We also need to provide manual systems for people to challenge their classifications. Automatic algorithms are going to make mistakes, whether it's by giving us bad credit scores or flagging us as terrorists. We need the ability to clear our names if this happens, through a process that restores human judgment.</p> <p>Sesame Credit sounds like a dystopia because we can easily imagine how the Chinese government can use a system like this to enforce conformity and stifle dissent. Our own systems seem safer, because we don't believe the corporations and governments that run them are malevolent. But the dangers are inherent in the technologies. As we move into a world where we are increasingly judged by algorithms, we need to ensure that they do so fairly and properly.</p> <p>This essay <a href="http://www.cnn.com/2016/01/06/opinions/schneier-china-social-scores/index.html">previously appeared</a> on CNN.com.</p>

DSA-3436 openssl - security update
https://www.debian.org/security/2016/dsa-3436
Thu, 07 Jan 2016 19:00:00 -0500
<p>Karthikeyan Bhargavan and Gaetan Leurent at INRIA discovered a flaw in the TLS 1.2 protocol which could allow the MD5 hash function to be used for signing ServerKeyExchange and Client Authentication packets during a TLS handshake. A man-in-the-middle attacker could exploit this flaw to conduct collision attacks to impersonate a TLS server or an authenticated TLS client.</p>

The sloth is coming!
Quick, get MD5 out of our internet protocols
http://go.theregister.com/feed/www.theregister.co.uk/2016/01/06/get_md5_out_of_internet_protocols/
Wed, 06 Jan 2016 16:47:00 -0500
<h4>Researchers point to lingering hash function</h4> <p>The outdated and crackable MD5 hash function is still lingering in critical parts of the internet's infrastructure and could undermine security, researchers have warned.</p>

Thorny Breakfast
http://www.flickr.com/photos/beezhive/23532611153/
Sun, 03 Jan 2016 19:53:00 -0500
<p><a href="http://www.flickr.com/people/beezhive/">beezhive</a> posted a photo:</p> <p><a href="http://www.flickr.com/photos/beezhive/23532611153/" title="Thorny Breakfast"><img src="http://farm2.staticflickr.com/1474/23532611153_c3a1e22f0c_m.jpg" width="240" height="160" alt="Thorny Breakfast" /></a></p> <p>A dark-eyed junco eating its breakfast after a snowfall in Arches National Park.</p>
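
The two MD5 items above (DSA-3436 and the SLOTH warning) share one practical takeaway: MD5 still computes digests perfectly well, which is exactly why it lingers in protocols; the problem is that collisions are cheap, so it must be refused wherever a signature depends on collision resistance. A minimal Python sketch of that policy (the `signing_digest` helper and its deny-list are illustrative, not from the linked advisories):

```python
import hashlib

# Hash functions with practical collision attacks, per the SLOTH-era
# guidance: never use these where a signature's security depends on
# collision resistance. (SHA-1 is included as it is on the same path.)
BROKEN_FOR_SIGNATURES = {"md5", "sha1"}

def signing_digest(name: str, data: bytes) -> bytes:
    """Digest `data` for use in a signature, refusing broken hashes."""
    if name.lower() in BROKEN_FOR_SIGNATURES:
        raise ValueError(f"{name} is collision-prone; use SHA-256 or better")
    return hashlib.new(name, data).digest()

# MD5 itself still "works" as a function -- the weakness is collisions,
# not correctness -- which is why it quietly survives in old protocols:
assert hashlib.md5(b"").hexdigest() == "d41d8cd98f00b204e9800998ecf8427e"

# ...but a signing path should reject it outright:
try:
    signing_digest("md5", b"ServerKeyExchange")
except ValueError as e:
    print(e)

digest = signing_digest("sha256", b"ServerKeyExchange")  # 32-byte digest
```

Real TLS stacks enforce this at the handshake level (e.g. by restricting the advertised signature algorithms) rather than in application code; the sketch only illustrates the deny-don't-trust-legacy-hashes rule the researchers are urging.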