Categories
RPG

See the Sketches J.R.R. Tolkien Used to Build Middle-Earth

Many of these are amazing, in that they show how the maps of one of the most adored fantasy worlds began just like those used in most homebrew D&D games.

Categories
Photography

Space Training

Photo made with Olympus EM10ii and M.14-42mm F3.5-5.6 II R lens at the Canadian National Exhibition on August 27, 2017 in Toronto, Ontario. Edited in Apple Photos.
Categories
Links

Cider Profiles

From the AV Club:

English ciders, for example, tend to be still, dry, and higher in alcohol than most ciders. (English ciders are often considered the red wine of the cider world.) Spanish ciders are more often compared to sour beers, with a funkier taste. French ciders are the most approachable of European ciders, as they have a champagne-like sparkle and are lower in alcohol content. Terroir isn’t all that differentiates European ciders from American ones, however, as their use of wild yeasts results in a bolder, more offbeat flavor profile.

American ciders are harder to pin down, as the unique processes brewers have been applying to craft beer—barrel-aging, hopping, the addition of spices and other fruits—are also being used by cider makers, resulting in a variety of different tastes. What most American ciders have in common, however, is lightness, crispness, and an easy-going approachability.

As someone who appreciates well-crafted beers and liquors, and has recently tried to get into cider, this is really helpful in orienting myself. Thus far I think my preferred kind of cider tends to be semi-experimental (I had a truly delightful gin barrel-aged dry cider earlier this summer) but knowing what to look for in flavour profiles is definitely helpful going forward.

Categories
Writing

Thoughts on 1Password ‘Home’ Edition

People are worried that someone’s going to steal their data or secretly access their personal devices. Border agents are accessing devices with worrying regularity. Travellers are being separated from their devices and electronics when they fly. Devices are stolen with depressing regularity. And then there’s the ongoing concern that jealous spouses, partners, or family members will try to see with whom their partner’s been emailing, Snapchatting, or WhatsApping.

Few people are well positioned to defend against all of these kinds of intrusions. Some might put a password on their device. Others might be provided with updates for their devices (and even install the updates!). But few consumers are well situated to determine which software is better or worse in terms of providing security and user privacy, or to make informed decisions about how much a security product is actually worth.

Consider a longstanding question that plagues regular consumers: which version of Windows is ‘the most secure’? Security experts often advise consumers to encrypt their devices to prevent many of the issues linked to theft. Unfortunately, only the professional or enterprise versions of Windows offer BitLocker, which provides strong full disk encryption.1 These professional versions are rarely provided by default to consumers when they buy their laptops or desktops — they get the ‘Home’ editions instead — because why would everyday folks want to encrypt their data at rest using the best security available? (See above list for reasons.)

Consumers ask the same security-related questions about different applications they use. Consider:

  • Which messaging software gives you good functionality and protects your chats from snoops?
  • Which cloud services is it safe to store my data in?
  • Which VoIP system encrypts my data securely, so no one else can listen in?
  • And so on…

Enter the Password Managers

Password managers all generally offer the same kind of security promises: use the manager, generate unique passwords, and thus reduce the likelihood that one website’s security failure will result in all of a person’s accounts being victimized. ‘Security people’ have been pushing regular consumers to adopt these managers for a long time. It’s generally an uphill fight because trusting a service with all your passwords is scary. It’s also a hill that got a little steeper following an announcement by AgileBits this week.
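That ‘unique password per site’ promise rests on a simple primitive: a cryptographically secure random generator. A minimal sketch in Python of the idea (an illustration of the technique, not how 1Password itself works):

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Return a random password drawn from letters, digits, and symbols."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# One password per site: a breach at one service can't be replayed at another.
site_passwords = {site: generate_password() for site in ("bank.example", "mail.example")}
```

Because each password is independently random, an attacker who dumps one site’s database learns nothing useful about your other accounts, which is the entire pitch of a manager.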

AgileBits sells a password manager called ‘1Password’. The company has recognized that people are worried about their devices being seized at borders, or about border agents compelling people to log into their various services and devices. Such services could include 1Password itself, which is pitched as a safe place to hold your logins, credit card information, identity information, and very private notes. Recognizing that the company has encouraged people to store super sensitive information in one place, and thus to create a goldmine for border agents, AgileBits has released a cool travel mode for 1Password to reduce the likelihood that a border agent will get access to that stash of private and secret data.

1Password Home Edition

But that cool travel mode that’s now integrated into 1Password? It’s only available to people who pay a monthly subscription for the software. So all those people who were already skeptical of password managers, who were very hard to convince to use a manager in the first place, but who we finally got to use 1Password or a similar service? Or those people who resist monthly payments and would rather just buy their software once and be done with it? Yeah, they’re unlikely to subscribe to AgileBits’ monthly service. And so those users who’ve been taught to store all their stuff in 1Password are effectively building up a prime private information goldmine for border agents, and AgileBits is willing to sell them out to the feds because they’re not paying up.

People who already sunk money into 1Password to buy the software are, now, users of the 1Password ‘Home’ version. Or to be blunt: they get the segregated kind of security that Microsoft is well known for. It’s disappointing that in AgileBits’ efforts to ‘convert’ people to ongoing payments the company has decided to penalize some of its existing user base. But I guess it’s great for border agents!

I’m sure AgileBits and 1Password will survive, just as Microsoft does, but it’s certainly a sad day when some users get more security than others. And it’s especially sad when a company that is predicated on aggregating sensitive data in one location decides it would rather exploit that vulnerability for its own profit instead of trying to protect all of its users equally.

NOTE: This was first published on Medium on May 24, 2017.


  1. Windows 8 and 10 do offer ‘Device Encryption’ but not all devices support this kind of encryption. Moreover, it relies on signing into Windows with a Microsoft Account and uploads the recovery key to Microsoft’s servers, meaning the user isn’t in full control of their own security. Unauthorized parties can, potentially, access the recovery key and subsequently decrypt computers secured with Device Encryption. ↩︎
Categories
Writing

When ‘Contact Us’ Forms Become Life Threatening

Journalists targeted by security services can write about relatively banal subjects. They might report on the amount and quality of food available in markets. They might write about the slow construction of roads. They might write about dismal housing conditions. They might even just include comments about a politician that are seen as unfavourable, such as noting that the politician wiped sweat from their brow before answering a question. Risky reporting from extremely hostile environments needn’t involve writing about government surveillance, policing, or corruption: far, far less ‘sensitive’ reporting can be enough for a government to cast a reporter as an enemy of the state.

The rationale for such hyper-vigilance on the part of dictatorships and authoritarian countries is that such governments regularly depend on international relief funds or the international community’s decision to not harshly impede the country’s access to global markets. Negative press coverage could cut off relief funds or monies from international organizations following a realization that the country lacks the ‘freedoms’ and ‘progress’ the government and most media publicly report on. If the international community realizes that the country in question is grossly violating human rights it might also limit the country’s access to capital markets. In either situation, limiting funds available to the government can endanger the reigning government or hinder leaders from stockpiling stolen wealth.

Calling for Help

Reaching out to international journalism protection organizations, or to foreign governments that might offer asylum, can raise serious negative publicity concerns for dictatorial or authoritarian governments. If a country’s journalists are fleeing because they believe they are in danger, and that fact rises to public attention, it could negatively affect a leader’s public image and the government’s access to funds. On this basis governments may place particular journalists under surveillance and punish them should they do anything to threaten the public image of the leader or country. Such surveillance is also utilized when reporters who are in a country are covering, and writing about, facts that stand in contravention to government propaganda.

The potential for electronic surveillance is particularly high, and serious, when the major telecommunications providers in a country tend to fully comply with, or willingly provide assistance to, state security and intelligence services. This degree of surveillance makes contacting international organizations that assist journalists risky; when a foreign organization does not encrypt communications sent to it, the organization’s security practices may further endanger a journalist calling for help. One of the many journalists covered in Bad News: Last Journalists in a Dictatorship who feared his life was in danger from the Rwandan government stated,

[h]e had written to the Committee to Protect Journalists, in New York, but someone in the president’s office had then shown him the application that he had filled out online. He didn’t trust people living abroad any longer. (Bad News: Last Journalists in a Dictatorship, 83-4)

Such surveillance could have taken place in a few different ways: the local network or computer the journalist used to prepare and send the application might have been compromised. Alternately, the national network might have been subject to surveillance for ‘sensitive’ materials. Though the former case is a prevalent problem (e.g., Internet cafes being compromised by state actors) it’s not one that international journalist organizations are well suited to fix. The latter situation, however, where the national network itself is hostile, is something that media organizations can address.

Network inspection technologies can be configured to look for particular pieces of metadata and content that are of interest to government monitors. By sorting for certain kinds of metadata, such as websites visited, content selection can be applied relatively efficiently and automated analysis of that content can subsequently be employed. That content analysis, however, depends on the government in question having access to plaintext communications.

Many journalism organizations historically have had ‘contact us’ pages on their websites, and many continue to have and use these pages. Some organizations secure their contact forms by using SSL encryption. But many organizations do not, including organizations that actively assert they will provide assistance to international journalists in need. These latter organizations make it trivial for states that are hostile to journalists to monitor in-country journalists who are making requests or issuing claims using these insecure contact forms.

Mitigating Threats

One way that journalism protection organizations can somewhat mitigate the risk of government surveillance is to implement SSL on their websites, which encrypts communications sent to the organization’s web server. It is still apparent to network monitors what website was visited but not which pages. And if the journalist sends a message using a ‘contact us’ form the data communicated will be encrypted, thus preventing network snoops from figuring out what is being said.

SSL isn’t a bulletproof solution to stopping governments from monitoring messages sent using contact forms. But it raises the difficulty of intercepting, decrypting, and analyzing the calls for help sent by at-risk journalists. And adding such security is relatively trivial to implement with the advent of free SSL encryption projects like ‘Let’s Encrypt’.

Ideally journalism organizations would either add SSL to their websites — to inhibit adversarial states from reading messages sent to these organizations — or only provide alternate means of communicating with them. That might mean mandating email and listing hosts that provide service-to-service encryption (i.e., those that have implemented STARTTLS), messaging applications that provide sufficient security to evade most state actors (everything from WhatsApp or Signal, to even Hangouts if the US Government and NSA aren’t the actors you’re hiding from), or any other communications channel that should be secure from non-Five Eyes surveillance states.

No organization wants to be responsible for putting people at risk, especially when those people are just trying to find help in dangerous situations. Organizations that exist to, in part, protect journalists thus need to do the bare minimum and ensure their baseline contact forms are secured. Doing anything else is just enabling state surveillance of at-risk journalists, and stands as antithetical to the organizations’ missions.

NOTE: This post was previously published on Medium.

Categories
Aside Writing

Limits of Data Access Requests

Last week I wrote about the limits of data access requests, as they related to car sharing applications like Uber. A data access request involves you contacting a private company and requesting a copy of your personal information, as well as the ways in which that data is processed, disclosed, and the periods of time for which data is retained.

Research has repeatedly shown that companies are very poor at comprehensively responding to data access requests. Sometimes this is because of divides between technical teams that collect and use the data, policy teams that determine what is and isn’t appropriate to do with data, and legal teams that ascertain whether collections and uses of data comport with the law. In other situations companies simply refuse to respond because they adopt a confused-nationalist understanding of law: if the company doesn’t have an office somewhere then that jurisdiction’s laws aren’t seen as applying to the company, even if the company does business in the jurisdiction.

Automated Data Export As Solution?

Some companies, such as Facebook and Google, have developed automated data download services. Ostensibly these services are designed so that you can download the data you’ve input into the companies, thus revealing precisely what is collected about you. In reality, these services don’t let you export all of the information that these respective companies collect. As a result, when people use these download services they end up with a false impression of just what information the companies collect and how it’s used.

A shining example of the kinds of information that are not revealed to users of these services has come to light. A recently leaked document from Facebook Australia revealed that:

Facebook’s algorithms can determine, and allow advertisers to pinpoint, "moments when young people need a confidence boost." If that phrase isn’t clear enough, Facebook’s document offers a litany of teen emotional states that the company claims it can estimate based on how teens use the service, including "worthless," "insecure," "defeated," "anxious," "silly," "useless," "stupid," "overwhelmed," "stressed," and "a failure."

This targeting of emotions isn’t necessarily surprising: in a past exposé we learned that Facebook conducted experiments during an American presidential election to see if they could sway voters. Indeed, the company’s raison d’être is to figure out how to pitch ads to customers, and figuring out when Facebook users are more or less likely to be affected by advertisements is just good business. If you use the self-download service provided by Facebook, or any other data broker, you will not receive data on how and why your data is exploited: without understanding how their algorithms act on the data they collect from you, you can never really understand how your personal information is processed.

But that raison d’être of pitching ads to people — which is why Facebook could internally justify the deliberate targeting of vulnerable youth — ignores the baseline ethics of whether it is appropriate to exploit our psychology to sell us products. To be clear, this isn’t a company stalking you around the Internet with ads for a car or couch or jewelry that you were browsing for. This is a deliberate effort to mine your communications to sell products at times of psychological vulnerability. The difference is between somewhat stupid tracking versus the deliberate exploitation of our emotional state.1

Solving for Bad Actors

There are laws around what you can do with the information provided by children. Whether Facebook’s actions run afoul of such law may never actually be tested in a court or privacy commissioner’s decision. In part, this is because actually mounting legal challenges is extremely difficult, expensive, and time consuming. These hurdles automatically tilt the balance towards activities such as this continuing, even if Facebook stops this particular activity. But, also, part of the problem is Australia’s historically weak privacy commissioner, as well as the limitations of such offices around the world: Privacy Commissioners’ Offices are often understaffed, under-resourced, and unable to chase every legally and ethically questionable practice undertaken by private companies. Companies know about these limitations and, as such, know they can get away with unethical and frankly illegal activities unless someone talks to the press about the activities in question.

So what’s the solution? The rote advice is to stop using Facebook. While that might be good advice for some, for a lot of other people leaving Facebook is very, very challenging. You might use it to sign into a lot of other services and so don’t think you can easily abandon Facebook. You might have stored years of photos or conversations and Facebook doesn’t give you a nice way to pull them out. It might be a place where all of your friends and family congregate to share information and so leaving would amount to being excised from your core communities. And depending on where you live you might rely on Facebook for finding jobs, community events, or other activities that are essential to your life.

In essence, solving for Facebook, Google, Uber, and all the other large data broker problems is a collective action problem. It’s not a problem that is best solved on an individualistic basis.

A more realistic kind of advice would be this: file complaints to your local politicians. File complaints to your domestic privacy commissioners. File complaints to every conference, academic association, and industry event that takes Facebook money.2 Make it very public and very clear that you and groups you are associated with are offended by the company in question that is profiting off the psychological exploitation of children and adults alike.3 Now, will your efforts to raise attention to the issue and draw negative attention to companies and groups profiting from Facebook and other data brokers stop unethical data exploitation tomorrow? No. But by consistently raising our concerns about how large data brokers collect and use personal information, and attributing some degree of negative publicity to all those who benefit from such practices, we can decrease the public stock of a company.

History is dotted with individuals who are seen as standing up to end bad practices by governments and private companies alike. But behind them tend to be masses of citizens who support those individuals: while standing up en masse may mean that we don’t each get individual praise for stopping some tasteless and unethical practices, our collective standing up will make it more likely that such practices will actually be stopped. By each working a little, we can together do something that we’d be hard pressed to change as individuals.

NOTE: This blog was first published on Medium on May 1, 2017.


  1. Other advertising companies adopt the same practices as Facebook. So I’m not suggesting that Facebook is worst-of-class and letting the others off the hook. ↩︎
  2. Replace ‘Facebook’ with whatever company you think is behaving inappropriately, unethically, or perhaps illegally. ↩︎
  3. Surely you don’t think that Facebook is only targeting kids, right? ↩︎
Categories
Writing

Uber and the Limits of Privacy Law

When was the last time that you thought long and hard about the information companies are collecting, sharing, and selling about you? Maybe you thought about it after reading some company had suffered a data breach or questionably used your data, and then set the worries out of your mind.

What you may not know is that most contemporary Western nation-states have established data protection and privacy legislation over the past several decades. A core element of these laws include data access rights: the right for individuals to compel companies to disclose what information the companies have collected, stored, and shared about them.

In Canada, federal commercial privacy legislation lets Canadian citizens and residents request their personal information. They can use an online application to make those requests to telecommunications companies, online dating companies, or fitness wearable companies. Or they can make requests themselves to specific companies on their own.

So, what happens when you make a request to a ride sharing company? A company like Uber? It might surprise you but they tend to provide you with a lot of information about you, pretty quickly, and in surprisingly digestible formats. You can see when you used a ride sharing application to book a ride, the coordinates of the pickup, where you were dropped off, and so forth.

But you don’t necessarily get all of the information that ride sharing companies collect about you. In the case of Uber, the company was recently found to be fingerprinting the phones its application was installed on. There’s some reason to believe that this was for anti-fraud purposes but, regardless, the collection of that information arguably constitutes the collection of personal information. Per Canadian privacy legislation, such information is defined as “information about an identifiable individual” and decisions by the Commissioner have found that if there is even an instant where machine identifiers are linked with identifiable subscriber data, that the machine identifiers also constitute personal information. Given that Uber was collecting the fingerprints while the application was installed, it likely was linking those fingerprints with subscriber data, even if only momentarily before subsequently separating the identifiers and other data.
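To see why a fingerprint is plausibly ‘personal information’, consider how one might be built: a stable hash over device attributes that survives the app being deleted and reinstalled. This is a hypothetical sketch of the general technique — the field names are invented and this is not Uber’s actual method:

```python
import hashlib

def device_fingerprint(attributes: dict) -> str:
    """Hash a set of device attributes into one stable identifier.

    The same device always yields the same token, even across app
    reinstalls, because the inputs don't change when the app does.
    """
    blob = "|".join(f"{key}={attributes[key]}" for key in sorted(attributes))
    return hashlib.sha256(blob.encode("utf-8")).hexdigest()

device = {"model": "Phone12,3", "os_build": "15E148", "carrier": "ExampleTel"}
fingerprint = device_fingerprint(device)  # stable across reinstalls
```

Link that stable token to a named account even once and, on the Commissioner’s reasoning described above, the token itself plausibly becomes personal information.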

So if Uber had a legal duty to inform individuals about the personal information that it collected, and failed to do so, what is the recourse? Either the Federal Office of the Privacy Commissioner of Canada could launch an investigation or someone who requested their personal information from Uber could file a formal complaint with the Office. That complaint would, pretty simply, argue that Uber had failed to meet its legal obligations by not disclosing the tracking information.

But even if Uber were found to have violated Canadian law there isn’t a huge amount of recourse for affected individuals. There aren’t any fines that can be levied by the Canadian federal commissioner. And Uber might decide that it doesn’t want to implement any recommendations that the Privacy Commissioner provided: in Canada, to enforce an order, a company has to be taken to court. Even when companies like Facebook have received recommendations they have selectively implemented them and ignored those that would impact their business model. So ‘enforcement’ tends to be limited to moral suasion when applied by the federal privacy commissioner.1

But the limits of enforcement speak to only part of the problem. What is worse is that we only know about Uber’s deceptive practices because of journalism. It isn’t because the company was forthcoming and proactively disclosed this information well in advance of fingerprinting devices. Other companies can read that signal and know that they can probably engage in questionable and unlawful practices with a pretty low expectation of being caught or punished.

In a recent article published by a summer fellow for the Citizen Lab, Adrian Fong argued that enforcing data protection and privacy laws on individual private companies is likely an untenable practice. Too few companies will be able to figure out how to deal with data access requests, fewer will be inclined to respond to them, and even fewer will understand whether they are obligated to respond to such requests or not in the first place. Instead, Fong argues that application stores — such as Google’s and Apple’s respective App stores — could include comprehensive data access rights as part of the contracts that app developers agree to with the app store owners. Failure to comply with the data access rights aspect of a contract could lead to an app being removed from the app store. Were Google and Apple to seriously implement such a practice then their ability to remove bad actors, such as Uber, from app stores could lead to a modification of business practices.

Ultimately, however, I’m not certain that the ‘solution’ to Uber is better privacy law. It’s probably not even just better regulation. Rather, ‘solving’ for companies like Uber demands changing how engineers and business persons are educated and trained, and modifying the grounds under which they’re rewarded and punished for their actions. Greater emphases on ethical practices and the politics of code need to be ingrained in their respective educational curriculum, just as arts and humanities students should be exposed in more depth to the hard sciences. And engineers, generally, need to learn that they’re not just solving hard problems such as preventing fraudulent rides: they’re also embedding power structures in the code they develop, and those structures can’t just run roughshod over the law that democratic publics have established to govern private behaviours. Or, at least, if they run afoul of the law — be it national data protection law or contract law — there will at least be serious consequences. Doing otherwise will simply incentivize companies to act unethically on the basis that there are few, or no, consequences for behaving like a bad actor.

NOTE: this was originally posted to Medium.


  1. Some of Canada’s provincial commissioners do have order-making powers. ↩︎
Categories
Writing

Feature Parity in Apple Notes

I have a love and occasional hate relationship with Apple Notes. And a mostly hate, and kind of fond memory, relationship with my longstanding notes application, Evernote. So for the past few months I’ve been slowly and tediously shifting a few thousand notes from one service to the other.

This is the story of why, the joys and miseries of the decision, and what I hope Apple changes in future versions of its note taking application.

Evernote’s Trust and Pricing Deficit

Evernote has some serious problems to my eye. I like some of its features, such as the ability to search .PDFs and to add tags to different notes. But these features aren’t enough to overcome the baseline problem that I no longer trust Evernote with my content. There are two core reasons underscoring this lack of trust: the company’s questionable stance on users’ privacy and the company’s willingness to increase prices without providing a corresponding improvement in its services.

In case you missed it, Evernote announced a plan to have specific employees read the content their users added to their notes. The employees would be reading users’ notes to improve the machine learning algorithms that Evernote was rolling out. Those algorithms were, themselves, meant to improve the services provided to users.

So the company was only going to infringe on its users’ privacy for the best of reasons.

The company backed off from its decision pretty quickly in the wake of a media backlash. Nevertheless, the initial decision left a bad taste in my mouth. How could I trust a company that had so cavalierly indicated a willingness to intrude upon its users’ private content? Some people use Evernote for personal journaling, others to manage their businesses, some to store medical information, and yet others for their research and professional writing. On what possible grounds could anyone at a company based on storing people’s thoughts and dreams think it would be appropriate to have employees read potentially sensitive notes? I was already somewhat uneasy with the company but seriously started exploring ways out of their service following this particular privacy SNAFU.

The second problem I had with the company was its decision to raise prices for professional users without providing a real benefit to end users. I get that companies sometimes have to adjust their pricing, but as a long-standing user it seemed like I was being penalized after trusting the company in its infancy. It just seemed wrong to penalize very early adopters such as myself, who’d championed the application from an early point in the company’s existence. There should have been a grace period, at the very least, if not an actual grandfathering of long-term users’ prices.

So in light of these issues, combined with a decreasing enjoyment of the user interface and user experience more generally, I decided that I wanted out.

Enter Apple Notes

I’ve used Apple Notes off and on for a lot of years. And until the updates that came in iOS 9, I generally stayed away. The service had just been deeply underwhelming in terms of its organization of different notes, to say nothing of the annoyances I had with sharing notes with other people.

The worst of those annoyances have been dealt with in a few ways:

  1. I can organize folders and use macOS to nest different folders in one another, which is essential for me to keep my notes in some semblance of order.
  2. I can search through notes with relative ease on all my Apple devices, though I admit this is an area where improvements would be delightful.
  3. I have more faith in Apple to push back against efforts to access my notes through a legal process, and to protect the privacy of my notes’ contents using best security practices.

Furthermore, I’m already paying for iCloud storage. As a result, shifting my Evernote documents to Apple Notes will likely leave me with a little more money in my bank account each year.

The actual writing experience in Apple Notes is a bit threadbare. That’s ok on the whole – the ability to add headings and titles, along with some baseline formatting is almost enough – and share sheets have made it a lot more pleasant to send a note to a colleague or collaborator.

Aside: The Miseries of Note Migration

There are some automated ways to pull data out of Evernote and into other note taking applications, including Apple Notes. But I’m not using them for two separate reasons.

First, I want to be able to re-curate all the stuff that’s collected in Evernote over the past years. That means putting my own eyes on old notes to determine what should and shouldn’t make the cut. I’ve shed about a thousand notes thus far, and I’m pretty sure even more are going to vanish into the digital ether.

Second, the way I organized notes in Evernote changed over the years I used it. I did a lot of learning while using the application, which meant I changed my tagging and notebook structures a few times. The result was a pretty bad mess, and I wanted it cleaned up.

I should also acknowledge that Evernote put a lot of really badly formatted notes in my various notebooks, and I’m spending more time than is really appropriate fixing them up. Specifically, I used the company’s web clipping tool on a regular basis, and the way it clipped pages was often sub-par (to be generous). In some cases HTML was laced through notes. In others, the clipped pages were filled with ads and other badly formatted junk left over from publishers’ ad placements, which ruined the reading experience.

I should be blunt: I was working around the deficiencies of Evernote’s clipping service. Apple Notes has its own problems and deficiencies and, between the two, Evernote is actually better at clipping than Apple.

Limitations of Apple Notes

There’s still room for improvements with Apple Notes.

iOS is definitely an area that’s still developing, and I periodically come across things that haven’t been implemented for some reason. One of the teething struggles with iOS’s Notes is linked with share sheets: why can I share a note with someone, but not a folder containing multiple notes? My use case is this: I often collect resources for ongoing projects in folders, and it’d be great to be able to share all of those items at once, as opposed to on an individual basis.

In a related vein, it’d be delightful to be able to:

  • Add hyperlinks to text in the Notes application for iOS;
  • Create sub-folders in the iOS application (I can do it in macOS so why not in iOS?);
  • In macOS, automatically create a note when I drag a file — such as a .pdf, .doc, or other file — into the application.

I also really, really wish that Notes on iOS and macOS supported smart folders and tags. macOS already supports that kind of functionality in Finder and (to an extent) iTunes and Photos! Adding these kinds of functions to the Notes application would mean I could more easily use the same note in multiple folders. The use case? I often keep reviews of articles and documents in Apple Notes and subsequently want to organize them into additional folders for specific papers I’m writing or blog posts I’m drafting. As it stands now, I need to make full copies of notes and re-create them in folders for the given paper or blog. That’s nuts: I shouldn’t be doubling or tripling notes.

But maybe it’s just too hard to do all that. So if I had to ask for a smaller thing it’d be this: please, please, please just let me pin important notes to the top of different folders in Notes.

Finally, it’d be amazing if there were some integration of Markdown functionality. I don’t imagine that’s going to happen anytime soon, but it’d be nice.1 A better web clipping service would also be helpful: Evernote did a serviceable, if not good, job of that, and Notes just sucks in comparison.

NOTE: This was originally posted on Medium.


  1. Yes, services like Bear might actually provide a better experience. And its support for Markdown makes it super tempting. But I’d rather pay for fewer services as part of some 2017 ‘financial cleaning’. ↩︎
Categories
Reviews

Review: Security Engineering

Anderson has successfully synthesized an incredibly diverse set of literature and, as a result, the book is useful for any person who is involved in security. The first section of the book outlines different threat models, offers accessible ways to develop and implement security designs, and also addresses issues of economics, psychology, and basic security issues that must be considered from the outset of security planning. Because different threat situations are raised throughout the book, the reader will learn to appreciate the value of adopting comprehensive threat planning. This approach is not meant to drive a ‘secure everything’ mentality but to encourage readers to reflect on, and understand, what is actually being protected, why it is being protected, and what it is being protected from. As a result, a manager or team lead not invested in the day-to-day securing of a principal can have intelligent and critical discussions with their security staff, ensuring that principals are properly identified and resources assigned to ensure desired levels of threat protection. For staff involved in implementing policy, reading this first section may help to couch concerns in a language that is better understood by management. It will also let those same staff members more precisely plan and implement policies that are handed down from higher levels in an organizational framework.

In the second section of the book, Anderson addresses a series of ‘topic areas’ such as multilateral security, banking and bookkeeping, monitoring and metering, security printing and seals, API attacks, copyright, telecom security, and more. In each section he leaves the reader with an excellent topical understanding of the historical issues these areas have encountered, how issues in various sections often relate to one another, and where and why errors in judgement have been made. The regular demonstrations of security failures – often due to side channel attacks – operate as powerful reminders that adequate policies that precisely identify how fault situations unfold are (arguably) amongst the most important elements of any security policy. It also demonstrates how what appear to be robust systems can be made quite brittle, thus emphasizing the need to think about how to develop effective defence in depth policies. This section is essential reading both for the actual implementers of security and for whoever is making purchasing decisions on behalf of organizations. With the rapid growth of the ‘security industry’ and an ever-increasing number of vendors invested in selling their latest products/snake oil, this section provides the reader with the tools needed to critically interrogate products and make better purchasing and implementation decisions.

The final section is, arguably, most needed by mid- to high-level organizational planners. Civil issues are raised – how does security/surveillance impact individuals’ rights? – as are step-by-step methodological systems for establishing threat patterns in relation to larger organizational concerns (e.g. profitability, consumer loyalty and trust). It also includes suggested practices for addressing potential security errors introduced in the generation of a digital or coded product, and how to establish an environment conducive to ensuring product- and process-based integrity, authenticity, and security. The final section is particularly needed for anyone looking into compliance seals and assurances. Anderson outlines the positive and deficient aspects of external audits, and also identifies how auditing systems have been gamed by nation-state actors and the reasons behind such gaming. While some organizations may be more concerned about receiving seals for bureaucratic purposes, for the agency that is concerned about the actual security value of the seals, this section provides much-needed resources to understand the nature of seal and certification systems.

I cannot recommend this book highly enough. Quite often, security books will emphasize a particular line of attack and bypass the broader conceptual systems underlying the incursion. This book largely takes the opposite track, focusing first on the conceptual deficiencies and the intellectual demands of designing secure systems. It then proceeds to outline attacks that often use the systems’ logic to the attacker’s advantage. As a result, the reader will leave with a critical appreciation of the concepts and implementations of security. The emphasis on the conceptual conditions of security means that the book will continue to age well, with readers being able to apply what is learned in this book to their work for years to come.

Categories
Links Writing

Why We Need to Reevaluate How We Share Intelligence Data With Allies

Last week, Canadians learned that their foreign signals intelligence agency, the Communications Security Establishment (CSE), had improperly shared information with its American, Australian, British, and New Zealand counterparts (collectively referred to as the “Five Eyes”). The exposure was unintentional: techniques that CSE had developed to de-identify metadata containing Canadians’ personal information failed to keep Canadians anonymous when juxtaposed with allies’ re-identification capabilities. Canadians recognize the hazards of such exposures, given that lax information-sharing protocols with US agencies previously contributed to the mistaken rendition and subsequent torture of a Canadian citizen in 2002.

Tamir Israel (of CIPPIC) and I wrote an article for Just Security following these revelations. We focused on the organization’s efforts, and failure, to suppress Canadians’ identity information that is collected as part of CSE’s ongoing intelligence activities, and on the broader implications of erroneous information sharing. Specifically, we examine how such sharing can have dire life consequences for those who are inappropriately targeted by Western allies as a result, and how it has led to the torture of a Canadian citizen. We conclude by arguing that the collection and sharing of such information raises questions about the ongoing viability of the agency’s old-fashioned mandates, which bifurcate Canadian and non-Canadian persons’ data despite the integrated nature of contemporary communications systems and data exchanges with foreign partners.

Read the Article