Categories
Links Writing

Who Benefits from 5G?

The Financial Times (FT) ran a somewhat mixed piece on the future of 5G. The thesis is that telecom operators are anxious to realise the financial benefits of 5G deployments even though those benefits were always expected to arrive in the coming years; there was little, if any, expectation that financial benefits would materialise immediately as the next-generation infrastructures were deployed.

The article correctly notes that consumers are skeptical of the benefits of 5G and correctly concludes that 5G was always really about the benefits that 5G Standalone will offer businesses. Frankly, though, the piece is poorly edited insofar as it combines two relatively distinct things without doing so in a particularly clear way.

5G Extended relies on existing 4G infrastructures. While there are theoretically faster speeds available to consumers, along with a tripartite spectrum band segmentation that can be used,1 most consumers won’t directly realise the benefits. One group that may, however, benefit (and that was not addressed at all in this piece) are rural customers. Opening up the lower-frequency spectrum blocks will allow 5G signals to travel farther with the benefit significantly accruing to those who cannot receive new copper, coax, or fibre lines. This said, I tend to agree with the article that most of the benefits of 5G haven’t, and won’t, be directly realised by individual mobile subscribers in the near future.2

5G Standalone is really where 5G will theoretically come alive. It’s also going to require a whole new way of designing and securing networks. At least as of a year or so ago, China was a global leader here, but largely because it had comparatively poor 4G penetration and so had sought to leapfrog to 5G SA.3 This said, American bans on semiconductors to Chinese telecoms vendors, such as Huawei and ZTE, have definitely had a negative effect on China’s ability to more fully deploy 5G SA.

In the Canadian case we can see investments by our major telecoms into 5G SA applications. Telus, Rogers, and Bell are all pouring money into technology clusters and universities. The goal isn’t to learn how much faster consumers’ phones or tablets can download data (though new algorithms to better manage/route/compress data are always under research) but, instead, to learn how to take advantage of the more advanced business-to-business features of 5G. That’s where the money is, though the question will remain as to how well telecom carriers will be able to rent-seek on those features when they already make money providing bandwidth and services to businesses paying for telecom products.


  1. Not all countries, however, are allocating the third, high-frequency, band on the basis that its utility remains in doubt. ↩︎
  2. Incidentally: it generally just takes a long, long time to deploy networks. 4G still isn’t reliably available across all of Canada, including in populated rural areas. This delay meaningfully impedes the ability of farmers, as an example, to adopt smart technologies that would reduce the costs associated with farm and crop management and that could, simultaneously, enable more efficient crop yields. ↩︎
  3. Western telecoms, by comparison, want to extend the life of the capital assets they purchased/deployed around their 4G infrastructures and so prefer to go the 5G Extended route to start their 5G upgrade path. ↩︎
Categories
Aside Writing

The Future of How I Share Links

(Photo by Harsch Shivam on Pexels.com)

There’s a whole lot happening all over social media and this is giving me a chance to really assess what I use, for what reason, and what I want to publish into the future. I’ve walked away from enough social media services to recognize it might be time for another heavy adjustment in my life.

Twitter has long been key to my work and valuable in developing a professional profile. I don’t know that this kind of engagement will be quite the same moving forward. And, if I’m honest, a lot of my Twitter usage for the past several years has been to surface and circulate interesting (often cyber- or privacy-related) links or public conversations, or to do short-form analysis of important government documents ahead of writing about them on my professional website.

The issue is that the links on Twitter then fade into the digital ether. While I’ve been using Raindrop.io for a while and really love the service, it doesn’t have the same kind of broadcast quality as Twitter.1

So what to do going forward? In theory I’d like to get back into the habit of publishing more link blogs, here, about my personal interests, because I really appreciate the ones that bloggers I follow and respect produce. I’m trying to figure out the format, frequency, and topics that make sense; I suspect I might try to bundle 4-6 thematic links and publish them as a set, but time will tell. This would mean there might be busier and slower periods, depending on my ability to ‘see’ a theme.

The challenge is going to be creating a workflow that is fast, easy, and imposes minimal friction. Here, I’m hoping that a shortcut that takes the title and URL of an article, formats it into Markdown using Text Case, and then provides a bit of space to write will do the trick. This is the format I used to rely on to create my Roundup posts, though I don’t really expect I’ll be able to return to link blogs of that length.
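
In rough terms, the transformation the shortcut performs is simple: title plus URL in, Markdown link plus writing space out. The sketch below is an illustrative stand-in, not the actual iOS Shortcut or Text Case behaviour, and the example title and URL are made up:

```python
# A rough sketch of the link-formatting step: take an article's title
# and URL and emit a Markdown link followed by blank space for notes.
# (The real workflow uses iOS Shortcuts and the Text Case app; this is
# just a stand-in to show the shape of the output.)

def format_link_entry(title: str, url: str) -> str:
    """Return a Markdown link with trailing space for commentary."""
    return f"[{title}]({url})\n\n"

entry = format_link_entry(
    "Who Benefits from 5G?",
    "https://example.com/who-benefits-from-5g",
)
print(entry)
```

The appeal of the workflow is exactly this minimalism: one input, one predictable output, and the friction of starting a post disappears.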

Update Nov 2023: I have really just leaned into sharing notable links through my Raindrop.io RSS feed, especially as social media services have fragmented all around us.


  1. I have, nonetheless, created an RSS feed with mostly links to privacy, cyber, and national security articles. ↩︎
Categories
Links Writing

Generalist Policing Models Remain Problematic

From the New York Times’ opinion section, this piece on “Why the F.B.I. Is so far behind on cybercrime” reinforces the position that American law enforcement is stymied in investigating cybercrimes because:

…it lacks enough agents with advanced computer skills. It has not recruited as many of these people as it needs, and those it has hired often don’t stay long. Its deeply ingrained cultural standards, some dating to the bureau’s first director, J. Edgar Hoover, have prevented it from getting the right talent.

Emblematic of an organization stuck in the past is the F.B.I.’s longstanding expectation that agents should be able to do “any job, anywhere.” While other global law enforcement agencies have snatched up computer scientists, the F.B.I. tried to turn existing agents with no computer backgrounds into digital specialists, clinging to the “any job” mantra. It may be possible to turn an agent whose background is in accounting into a first-rate gang investigator, but it’s a lot harder to turn that same agent into a top-flight computer scientist.

The “any job” mantra also hinders recruitment. People who have spent years becoming computer experts may have little interest in pivoting to another assignment. Many may lack the aptitude for — or feel uneasy with — traditional law enforcement expectations, such as being in top physical fitness, handling a deadly force scenario or even interacting with the public.

This very same issue plagues the RCMP, which also has a generalist model that discourages or hinders specialization. While we do see better business practices in, say, France, with an increasing LEA capacity to pursue cybercrime, we’re not yet seeing North American federal governments overhaul their own policing services.1

Similarly, the FBI is suffering from an ‘arrest’ culture:

The F.B.I.’s emphasis on arrests, which are especially hard to come by in ransomware cases, similarly reflects its outdated approach to cybercrime. In the bureau, prestige often springs from being a successful trial agent, working on cases that result in indictments and convictions that make the news. But ransomware cases, by their nature, are long and complex, with a low likelihood of arrest. Even when suspects are identified, arresting them is nearly impossible if they’re located in countries that don’t have extradition agreements with the United States.

In the Canadian context, not only is pursuing an arrest a problem due to jurisdiction, but the complexity of cases can mean an officer spends huge amounts of time on a computer rather than out in the field ‘doing the work’ of their colleagues who are not cyber-focused. This perception of just ‘playing games’ or ‘surfing social media’ can sometimes lead to friction between cyber investigators and older-school leaders.2 And, making things even more challenging, the resources to train officers to detect and pursue Child Sexual Abuse Material (CSAM) are relatively plentiful, whereas economic and non-CSAM investigations tend to be severely under-resourced.

There is some hope coming for Canadian investigators, by way of CLOUD Act agreements between the Canadian and American governments and updates to the Cybercrime Convention, though both will require changes to criminal law, and potentially to provincial privacy laws, to empower LEAs with expanded powers. And even with access to more American data to enable investigations, this will not solve the arrest challenges when criminals are operating out of non-extradition countries.

It remains to be seen whether an expanded capacity to issue warrants to American providers will reduce some of the Canadian need for specialized training to investigate more rudimentary cyber-related crimes or whether, instead, it will have minimal effect overall.


  1. This is also generally true of provincial and municipal services. ↩︎
  2. Fortunately this is a less common issue, today, than a decade ago. ↩︎
Categories
Photography Reviews Writing

Glass 365 Days Later

(Wintertime Rush by Christopher Parsons)

I’ve been actively using Glass for about a full year now. Glass is a photo sharing site where users must pay either a monthly or yearly fee; it costs to post but viewing is free.

I publish a photo almost every day and I regularly go through the community to view other folks’ photos and comment on them. In this short review I want to identify what’s great about the service, what’s so-so, and where there’s still room to grow. All the images in this blog post were previously posted to Glass.

Let me cut to the chase: I like the service and have resubscribed for another full year.

The Good

The iOS mobile client was great at launch and it remains terrific. It’s fast and easy to use, and beats all the other social platforms’ apps that I’ve used because it is so simple and functional. You can’t edit your images in the Glass app and I’m entirely fine with that.

(Fix, Found by Christopher Parsons)

The community is delightful from my perspective. The comments I get are all thoughtful and the requirement to pay-to-post means that there aren’t (yet) any trolls that I’ve come across. Does this mean the community is smaller? Definitely. But is it a more committed and friendly community? You bet. Give me quality over quantity any day of the week.

All subscribers have the option of a public-facing profile, which anyone can view, or one that is restricted to other subscribers. I find the public profiles to be pretty attractive and good at arranging photos, especially when accessing a profile on a wide-screen device (e.g., a laptop, desktop, tablet, or phone in landscape).

The platform launched as iPhone-only, though it has been expanding since then. The iPad client is a joy to use and the developers have an Android client on their roadmap. A Windows application is available and you can use the service on the web too.

(Birthday Pose by Christopher Parsons)

Other things that I really appreciate: Glass has a terrifically responsive development team. About 50 community requests have been implemented since launch; while some are just bug fixes, most are feature updates to the platform. Glass is also the opposite of the traditional roach-motel social media platform. You can download your photos from the site at any time; you’re paying for the service, not for surveillance. That’s great!

The So-So

So is Glass perfect then? No. It has only a small handful of developers as compared to competitors like Instagram or Vero, which means that some overdue features are still in development.

(‘Til Pandemic Does Us Part by Christopher Parsons)

A core critique is that there is no Android application. That’s fair! However, iOS users are more likely to spend money on apps, so it made economic sense to prioritize that user base.1 Fortunately an Android application is on its way and a Windows version was recently released.

A more serious issue for existing users is an inability to ‘tag’ photos. While photos can be assigned to categories in the application (and more categories have been added over time), it’s hard to achieve the customization of bigger sites like Flickr. The result is that discovery is more challenging and it’s harder to build up a set of metadata that could be used in the future for presenting photos. Glass, currently, is meant to provide a linear feed of photos—that’s part of its charm!—but more sophisticated methods of displaying images on users’ portfolios may eventually require the company to adopt a tagging system. Why does it matter whether there is one today? Because for heavier users2 re-viewing and tagging all their photos will be a royal pain in the butt, if tagging is ever integrated into the platform.

(Tall and Proud by Christopher Parsons)

If you’re looking to use Glass as a formal portfolio, well, there are almost certainly better services and platforms you should rely upon. Which is to say: the platform does not let you create albums or pin certain photos to the top of your profile. I entirely get that the developers are aiming for a simple service at launch, but would also appreciate the ability to better categorize some of my photos. In particular, I would like to create things such as:

  • Best of a given year
  • Having albums that break up street versus landscape versus cityscape images
  • Being able to create albums for specific events, such as particular vacations or documentary events
  • Photos that I generally think are amongst my ‘best’ overall

This being said, albums and portfolios are in the planning stages. I look forward to seeing what is ultimately released.

(Public Praise by Christopher Parsons)

As much as I like the community as it stands today, I would really like the developers to add some small or basic things, like threaded comments. They’re coming, at some point, after discovery features are integrated (e.g., search by location, by camera, etc.). Still, as it stands today, the lack of even two levels of threaded comments means that active conversations are annoying to follow.

Finally, Glass is really what you make of it. If you’re a photographer who wants to just add photos and never engage with the community then I’d imagine it’s not as good as a platform such as Instagram or Vero. Both of the latter apps have larger user bases and you’re more likely to get the equivalent of a like; I don’t know how large Glass’ user-base is but it’s not huge despite being much larger than at launch. However, if you’re active in the community then I think that you can get more positive, or helpful, feedback than on other platforms. At least for me, as a very enthusiastic amateur photographer, the engagement I get on Glass is remarkably more meaningful than on any other platform on which I’ve shared my photographs.

The Bad

Honestly, the worst part about Glass is still discoverability.3 You can see a semi-random set of photographers using the service, which isn’t bad…except that some of them may not have posted anything to the platform for months or even a year. I have no idea why this is the case.

(Stephanie by Christopher Parsons)

The only other way to discover other photographers is to regularly dig through the different photography categories, ‘appreciate’4 photos you see, and follow the photographers who appeal to your tastes. This isn’t terrible, but as the ‘best’ way of discovering photos it really isn’t great. While the company ‘highlights’ photographers on the Glass website and through its Twitter feed, the equivalent curation still doesn’t exist in the application itself. That’s not ideal.

The developers have promised that additional discovery functions will be rolling out. They intend to enable search by camera type or location, but thus far nothing has been released. They’ve been good at slowly and deliberately releasing features, and new features have always been thoughtful when implemented, so I’m hopeful that when discoverability is updated it’ll be pretty good. Until then, however, it’s frankly pretty bad.

(Lonely Traveller by Christopher Parsons)

If I were to find a second thing that’s missing, to date, it would be that there’s no way of embedding Glass images in other CMSes. The platform does support RSS, which I appreciate, but I want the platform to offer full-on embeds so I can easily cross-post images to other web spaces (like this blog!). Embeds could, also, include some language/links that let viewers sign up for the service as a way of growing the subscriber base.
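
In the absence of official embeds, the RSS feed at least makes a do-it-yourself workaround imaginable: parse the feed and build simple link snippets for a blog post. The sketch below is my own illustration, not Glass functionality; the feed structure and URLs shown are assumptions rather than Glass’s actual schema:

```python
# A hypothetical cross-posting helper: read an RSS feed (here a small
# inline sample standing in for a real Glass feed) and turn each item
# into an HTML snippet that links back to the photo's page.
import xml.etree.ElementTree as ET

SAMPLE_RSS = """<rss><channel>
  <item>
    <title>Wintertime Rush</title>
    <link>https://glass.photo/example/wintertime-rush</link>
  </item>
</channel></rss>"""

def embed_snippets(rss_text: str) -> list[str]:
    """Turn each RSS <item> into a simple HTML link snippet."""
    root = ET.fromstring(rss_text)
    snippets = []
    for item in root.iter("item"):
        title = item.findtext("title")
        link = item.findtext("link")
        snippets.append(f'<p><a href="{link}">{title} (on Glass)</a></p>')
    return snippets

print(embed_snippets(SAMPLE_RSS)[0])
```

A real embed feature could do far more than a bare link, of course, which is exactly why I’d rather the platform offer it natively.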

The third thing is that I wish Glass would offer a way of assessing whether a photo has already been uploaded. At this point I’ve uploaded over 300 photos and I want to ensure that I don’t accidentally upload a duplicate. This is definitely a problem associated with heavier use of the service, but it will become more prominent as users ‘live’ on the platform for more and more years.
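
Until something like this exists in the service, the only option is a local workaround. One common approach, sketched below under the assumption that you keep exported copies of everything you post, is to fingerprint each photo’s bytes and check new files against the known set (this is my own illustration, not a Glass feature, and it only catches byte-identical duplicates, not re-exports):

```python
# A local duplicate check: hash each photo's bytes with SHA-256 and
# compare new candidates against the set of fingerprints already seen.
import hashlib

def fingerprint(data: bytes) -> str:
    """Return a stable SHA-256 fingerprint for a photo's bytes."""
    return hashlib.sha256(data).hexdigest()

# Fingerprints of photos already posted (placeholder bytes here; in
# practice you'd read each exported image file from disk).
uploaded = {fingerprint(b"photo-one-bytes"), fingerprint(b"photo-two-bytes")}

def already_uploaded(data: bytes) -> bool:
    """True if this exact file has been posted before."""
    return fingerprint(data) in uploaded

print(already_uploaded(b"photo-one-bytes"))   # True
print(already_uploaded(b"new-photo-bytes"))   # False
```

Because any re-edit or re-export changes the bytes, a server-side check inside Glass itself would still be far more reliable than this kind of client-side bookkeeping.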

Conclusion

So, at the end of a year, what do I think of Glass?

First, I think that it truly is a photography community for photographers. It isn’t trying to be a broader social network that lets you share what music you’re listening to, or TV shows and movies you’re watching, or books you’ve finished, or temporary stories or images. There is totally a space for a network like that but it’s not Glass and I’m fine with it being a simpler and more direct kind of platform.

(Night Light by Christopher Parsons)

Second, it is a platform with active developers and a friendly community. Both of those things are pretty great. And the developers have a clear and opinionated sense of taste: they’re creating a beautiful application and associated service. There’s real value in the aesthetic for me.

Third, it’s not quite the place to showcase your work, today, if you are trying to semi-professionally market your photography. There are no albums or other ways of highlighting or collecting your images. Glass is much closer to the original version of Instagram in just presenting a feed of historical images instead of a contemporary service like Flickr or even Instagram. And…that’s actually a pretty great thing! That said, the roadmap includes commitments to enabling better highlighting/collecting of images. This will be increasingly important as more people upload more photographs to the service.

(Supervisory Assistance by Christopher Parsons)

Fourth, it’s still relatively cheap as compared to other paid offerings. It is less than half the cost of a Flickr Pro account, as just one example. And there are no ads for subscribers or for individuals who are browsing public profiles and associated portfolios.

(Distressed by Christopher Parsons)

So, in conclusion, I’d strongly suggest trying out Glass if you’re a committed and enthusiastic amateur. It’s not the same as Instagram or Instagram clones. That’s both part of the point and part of the magic of the platform that the Glass team is creating and incubating.


  1. Yes, you might be willing to pay money, dear reader, but you’re statistically deviant. In a good way! ↩︎
  2. Such as myself… ↩︎
  3. The developers are, also, very well aware of this issue. ↩︎
  4. Glass does not have ‘likes’ per se, but lets users click an ‘appreciation’ button. Appreciations are only ever sent to the photographer and are not accumulated numerically to be presented to either the public or the photographer who uploaded the photograph. ↩︎
Categories
Links Writing

National Security Means What, Again?

There have been any number of concerns about Elon Musk’s behaviour, especially in recent weeks and months. This has led some commentators to warn that his purchase of Twitter may raise national security risks. Gill and Lehrich try to make this argument in their article, “Elon Musk Owning Twitter is A National Security Threat.” They give three reasons:

First, Musk is allegedly in communication with foreign actors – including senior officials in the Kremlin and Chinese Communist Party – who could use his acquisition of Twitter to undermine American national security.

Will Musk’s foreign investors have influence over Twitter’s content moderation policies? Will the Chinese exploit their significant leverage over Musk to demand he censor criticism of the CCP, or turn the dials up for posts that sow distrust in democracy?

Finally, it’s not just America’s information ecosystem that’s at stake, it’s also the private data of American citizens.

It’s worth noting that at no point do the authors provide a definition of ‘national security’, which forces the reader to guess what they likely mean. More broadly, in journalistic and opinion-writing circles there is a curious, and increasingly common, conjoining of national security and information security. The authors themselves make this link in the kicker paragraph of their article, when they write:

It is imperative that American leaders fully understand Musk’s motives, financing, and loyalties amidst his bid to acquire Twitter – especially given the high-stakes geopolitical reality we are living in now. The fate of American national security and our information ecosystem hang in the balance.1

Information security, generally, is focused on dangers associated with true or false information being disseminated across a population. It is distinguished from cyber security, which is typically focused on the digital security protocols and practices designed to reduce technical computer vulnerabilities. Whereas the former focuses on a public’s mind, the latter attends to how digital and physical systems are hardened against technical exploitation.

Western governments have historically resisted authoritarian governments’ attempts to link the concepts of information security and cyber security. The reason is that authoritarian governments want to establish international principles and norms whereby it becomes appropriate for governments to control the information made available to their publics under the guise of promoting ‘cyber security’. Democratic countries that emphasise the importance of intellectual freedom, freedom of religion, freedom of assembly, and other core rights have historically opposed promoting information security norms.

At the same time, misinformation and disinformation have become increasingly popular areas of study and commentary, especially following Donald Trump’s election as POTUS. And, in countries like the United States, Trump’s adoption of lies and misinformation was often cast as a national security issue: correct information should be communicated, and efforts to intentionally communicate false information should be blocked, prohibited, or prevented from massively circulating.

Obviously Trump’s language, actions, and behaviours were incredibly destabilising and abominable for an American president. And his presence on the world stage arguably emboldened many authoritarians around the world. But there is a real risk in using terms like ‘national security’ without definition, especially when the application of ‘national security’ starts to stray into the domain of what could be considered information security. Specifically, as everything becomes ‘national security’ it is possible for authoritarian governments to adopt the language of Western governments and intellectuals, and assert that they too are focused on ‘national security’ whereas, in fact, these authoritarian governments are using the term to justify their own censorious activities.

Now, does this mean that if we in the West are more careful about our use of language, authoritarian governments will become less censorious? No. But by being more careful and thoughtful in our language, public argumentation, and positioning of our policy statements, we may at least prevent those authoritarian governments from using our discourse as justification for their own activities. We should, then, be careful and precise in what we say to avoid giving a fig leaf of cover to authoritarian activities.

And that will start with parties who use terms like ‘national security’ clearly defining what they mean, such that it is clear how national security differs from information security. Unless, of course, authors and thinkers are in fact leaning into the conceptual apparatus of repressive governments in an effort to save democratic governance. For any author who thinks such a move is wise, however, I must admit that I harbour strong doubts about the efficacy or utility of such attempts.


  1. Emphasis not in original. ↩︎
Categories
Links Writing

Can University Faculty Hold Platforms To Account?

Heidi Tworek has a good piece with the Centre for International Governance Innovation, where she questions whether there will be a sufficient number of faculty in Canada (and elsewhere) to make use of information that digital-first companies might be compelled to make available to researchers. The general argument goes that if companies must make information available to academics then these academics can study the information and, subsequently, hold companies to account and guide evidence-based policymaking.

Tworek’s argument focuses on two key things.

  1. First, there has been a decline in the tenured professoriate in Canada, with the effect that the adjunct faculty who are ‘filling in’ are busy teaching and really don’t have a chance to lead research.
  2. Second, while a vanishingly small number of PhD holders obtain a tenure-track role, a reasonable number may be going into the very digital-first companies that researchers need data from to hold them accountable.

On this latter point, she writes:

If the companies have far more researchers than universities have, transparency regulations may not do as much to address the imbalance of knowledge as many expect.

I don’t think that hiring people with PhDs necessarily means that companies are addressing knowledge imbalances. Whatever is learned by these researchers tends to be sheltered within corporate walls and protected by NDAs. So those researchers going into companies may learn what’s going on but be unable (or unmotivated) to leverage what they know in order to inform policy discussions meant to hold companies to account.

To be clear, I really do agree with a lot in this article. However, I think it does have a few areas for further consideration.

First, more needs to be said about what, specifically, ‘transparency’ encompasses and its relationship to data type, availability, and so forth. Transparency is a deeply contested concept and there are a lot of ways that the revelation of data basically creates a funhouse-of-mirrors effect, insofar as what researchers ‘see’ can be very distorted from the reality of what truly is.

Second, making data available isn’t just about whether universities have the professors to do the work but, really, whether the government and its regulators have the staff time as well. Professors are doing a lot of things whereas regulators can assign staff to just work the data, day in and day out. Focus matters.

Third, and related, I have to admit that I have pretty severe doubts about the ability of professors to seriously take up and make use of information from platforms, at scale and with policy impact, because it’s never going to be their full time jobs to do so. Professors are also going to be required to publish in books or journals, which means their outputs will be delayed and inaccessible to companies, government bureaucrats and regulators, and NGO staff. I’m sure academics will have lovely and insightful discussions…but they won’t happen fast enough, or in accessible places or in plain language, to generally affect policy debates.

So, what might need to be added to start fleshing out how universities are organised to make use of data released by companies and have policy impacts in research outputs?

First, universities in Canada would need to get truly serious about creating a ‘researcher class’ to analyse corporate reporting. This would involve prioritising the hiring of research associates and senior research associates who have few or no teaching responsibilities.1

Second, universities would need to work to create centres such as the Citizen Lab, or related groups.2 These don’t need to be organisations which try to cover the waterfront of all digital issues. They could, instead, be more focused, to reduce the number of staff or fellows needed to fulfil the organisation’s mandate. Any and all centres of this type would see a small handful of people with PhDs (who largely lack teaching responsibilities) guide multidisciplinary teams of staff. Those same staff members would not typically need a PhD. They would need to be nimble enough to move quickly while using a peer-review-lite process to validate findings, but not see journal or book outputs as their primary currency for promotion or hiring.

Third, the centres would need a core group of long-term staffers. This core body of long-term researchers is needed to develop policy expertise that graduate students just don’t possess or develop in their short tenure in the university. Moreover, these same long-term researchers can then train graduate student fellows of the centres in question, with the effect of slowly building a cadre of researchers who are equipped to critically assess digital-first companies.

Fourth, the staff at research centres need to be paid well and properly. They cannot be regarded as ‘graduate student plus’ employees but as specialists who will be of interest to government and corporations. This means that universities will need to pay competitive wages in order to secure the staff needed to fulfil centre mandates.

Basically if universities are to be successful in holding big data companies to account they’ll need to incubate quasi-NGOs and let them loose under the university’s auspice. It is, however, worth asking whether this should be the goal of the university in the first place: should society be outsourcing a large amount of the ‘transparency research’ that is designed to have policy impact or guide evidence-based policy making to academics, or should we instead bolster the capacities of government departments and regulatory agencies to undertake these activities?

Put differently, and in context with Tworek’s argument: I think that assuming that PhD holders working as faculty in universities are the solution to analysing data released by corporations can only hold if you happen to (a) hold or aspire to hold a PhD, and (b) possess or aspire to possess a research-focused tenure-track job.

I don’t think that either (a) or (b) should guide the majority of the way forward in developing policy proposals as they pertain to holding corporations to account.

Do faculty have a role in holding companies such as Google, Facebook, Amazon, Apple, or Netflix to account? You bet. But if the university, and university researchers, are going to seriously get involved in using data released by companies to hold them to account and have policy impact, then I think we need dedicated and focused researchers. Faculty who are torn between teaching, writing and publishing in inaccessible locations using baroque theoretical lenses, pursuing funding opportunities and undertaking large amounts of department service and performing graduate student supervision are just not going to be sufficient to address the task at hand.


  1. In the interests of disclosure, I currently hold one of these roles. ↩︎
  2. Again in the interests of disclosure, this is the kind of place I currently work at. ↩︎
Categories
Links Writing

Tech for Whom?

Charley Johnson has a good line of questions and critique for any organization or group which is promoting a ‘technology for good’ program. The crux is that techno-utopian proposals suggest technology as the means to solve a problem as defined by the party making the proposal. Put another way, these kinds of solutions do not tend to solve real underlying problems but, instead, solve the ‘problems’ for which hucksters have built a pre-designed ‘solution’.

This line of analysis isn’t new, per se, and follows in a long line of equity, social justice, feminist, and critical theory writers. Still, Johnson does a good job of extracting key issues with techno-utopianism. Key is that these solutions tend to present a ‘tech for good’ mindset that:

… frames the problem in such a way that launders the interests, expertise, and beliefs of technologists…‘For good’ is problematic because it’s self-justifying. How can I question or critique the technology if it’s ‘for good’? But more importantly, nine times out of ten ‘for good’ leads to the definition of a problem that requires a technology solution.

One of the things that we are seeing more commonly is the use of data, in and of itself, as something that can be used for good: data for good initiatives are cast as being critical to solving climate change, making driving safer, or automating away the messier parts of our lives. Some of these arguments are almost certainly even right! However, the proposed solutions tend to rely on collecting, using, or disclosing data—derived from individuals’ and communities’ activities—without obtaining their informed, meaningful, and ongoing consent. ‘Data for good’ depends, first and often foremost, on removing the agency to say ‘yes’ or ‘no’ to a given ‘solution’.

In the Canadian context efforts to enable ‘good’ uses of data have emerged through successively introduced pieces of commercial privacy legislation. The legislation would permit the disclosure of de-identified personal information for “socially beneficial purposes.” Information could be disclosed to government, universities, public libraries, health care institutions, organizations mandated by the government to carry out a socially beneficial purpose, and other prescribed entities. Those organizations could use the data for a purpose related to health, the provision or improvement of public amenities or infrastructure, the protection of the environment or any other prescribed purpose.

Put slightly differently, whereas Johnson’s analysis is directed towards a broad concept of ‘data for good’ in tandem with elucidating examples, the Canadian context threatens to see broad-based techno-utopian uses of data enabled at the legislative level. The legislation includes the ability to expand who can receive de-identified data and the range of socially beneficial uses, with new parties and uses being defined by regulation. While there are a number of problems with these kinds of approaches—which include the explicit removal of the consent of individuals and communities to having their data used in ways they may actively disapprove of—at their core the problems are associated with power: the power of some actors to unilaterally make non-democratic decisions that will affect other persons or communities.

This capacity to invisibly express power over others is the crux of most utopian fantasies. In such fantasies, power relationships are resolved in the absence of making them explicit and, in the process, an imaginary is created wherein social ills are fixed as a result of power having been hidden away. Decision making in a utopia is smooth and efficient, and the power asymmetries which enable such situations are either hidden away or simply not substantively discussed.

Johnson’s article concludes with a series of questions that act to re-surface issues of power vis-a-vis explicitly raising questions of agency and the origin and nature of the envisioned problem(s) and solution(s):

Does the tool increase the self-determination and agency of the poor?

Would the tool be tolerated if it was targeted at non-poor people?

What problem does the tool purport to solve and who defined that problem?

How does the way they frame the problem shape our understanding of it?

What might the one framing the problem gain from solving it?

We can look to these questions as, at their core, raising issues of power—who is involved in determining how agency is expressed, who has decision-making capabilities in defining problems and solutions—and, through them, issues of inclusion and equity. Implicit through his writing, at least to my eye, is that these decisions cannot be assigned to individuals alone, but to individuals and their communities.

One of the great challenges for modern democratic rule making is that we must transition from imagining political actors as rational, atomic, subjects to ones that are seen as embedded in their community. Individuals are formed by their communities, and vice versa, simultaneously. This means that we need to move away from traditional liberal or communitarian tropes to recognize the phenomenology of living in society, alone and together simultaneously, while also recognizing and valuing the tilting power and influence of ‘non-rational’ aspects of life that give life much of its meaning and substance. These elements of life are most commonly those demonized or denigrated by techno-utopians on the basis that technology is ‘rational’ and is juxtaposed against the ‘irrationality’ of how humans actually live and operate in the world.

Broadly, and in conclusion, techno-utopianism is functionally an issue of power and domination. We see ‘tech bros’ and traditional power brokers alike advancing solutions to their perceived problems, and this approach may be further reified should legislation be passed to embed this conceptual framework more deeply into democratic nation-states. What is under-appreciated is that while such legislative efforts may make certain techno-utopian activities lawful, the subsequent actions will not, as a result, necessarily be regarded as legitimate by those affected by the lawful ‘socially beneficial’ uses of de-identified personal data.

The result? At best, ambivalence that reflects the population’s existing alienation from democratic structures of government. More likely, however, is that lawful but illegitimate expressions of ‘socially beneficial’ uses of data will further delegitimize the actions and capabilities of the states, with the effect of further weakening the perceived inclusivity of our democratic traditions.

Categories
Reviews Solved Writing

So You Can’t Verify Your Apple iCloud Custom Domain

Photo by Tim Gouw on Pexels.com

When you set up a custom iCloud email domain you have to modify the DNS records held by your domain’s registrar. On the whole, the information provided by Apple is simple and makes it easy to set up the custom domain.

However, if you change where your domain’s name servers point, such as when you modify the hosting for a website associated with the domain, you must update the DNS records with whomever you are pointing the name servers to. Put differently: if you have configured your Apple iCloud custom email by modifying the DNS information at host X, as soon as you shift to host Y by pointing your name servers at them you will also have to update DNS records with host Y.
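As a sketch of what has to move with you, the records Apple asks for look roughly like the following zone-file fragment. The values shown are illustrative: the exact strings—including the unique `apple-domain` verification token and the DKIM target—are generated per domain during Apple’s setup flow, and your new host’s interface may present them differently.

```
; Illustrative DNS records for an Apple iCloud custom email domain.
; Replace example.com and placeholder values with what Apple's setup
; flow actually provides for your domain.
example.com.                 MX    10 mx01.mail.icloud.com.
example.com.                 MX    10 mx02.mail.icloud.com.
example.com.                 TXT   "apple-domain=XXXXXXXXXX"
example.com.                 TXT   "v=spf1 include:icloud.com ~all"
sig1._domainkey.example.com. CNAME sig1.dkim.example.com.at.icloudmailadmin.com.
```

If any of these records fail to appear at the new host, mail delivery and Apple’s verification will both break once propagation completes.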

Now, what if you don’t do this? Eventually, as DNS information propagates over the subsequent 6–72 hours, your custom iCloud domain email address will stop sending or receiving mail because the routing information is no longer valid. This will cause Apple’s iCloud custom domain system to try to re-verify the domain; it will do this because the DNS information you initially supplied is no longer valid.

Should you run into this issue you might, naturally, first reach out to Apple support. You are, after all, running your email through their servers.

Positively: you will very quickly get a real-live human on the phone to help you. That’s great! Unfortunately, however, there is very little that Apple’s support staff can do to help you. There are very, very few internal help documents pertaining to custom domains. As was explained to me, the sensitivity and complexity of DNS (and the fact that information is non-standardized across registrars) means that the support staff really can’t help much: you’re mostly on your own. This is not communicated when setting up Apple custom email domains.

In a truly worst-case scenario you might get a well-meaning but ignorant support member who leads you deeply astray in attempting to troubleshoot and fix the problem. This, unfortunately, was my experience: no matter what is suggested, this problem is not solved by deleting your custom email accounts hosted by Apple on iCloud. Don’t be convinced this is ever a solution.

Worse, after deleting the email accounts associated with your custom iCloud domain email you can get into a situation where you cannot click the re-verify button on the front end of iCloud’s custom email domain interface. The result is that while you see one thing on the graphical interface—a greyed out option to ‘re-verify’—folks at Apple/server-side do not see the same status. Level 1 and 2 support staff cannot help you at this stage.

As a result, you can (at this point) be in limbo insofar as email cannot be sent or received from your custom domain. Individuals who send you messages will get errors that the email identity no longer exists. The only group at Apple who can help you, in this situation, is Apple’s engineering team.

That team apparently does not work weekends.

What does this mean for using custom email domains for iCloud? For many people not a lot: they aren’t moving their hosting around and so it’s very much a ‘set and forget’ situation. However, for anyone who does have an issue the Apple support staff lacks good documentation to determine where the problem lies and, as a result, can (frankly) waste an inordinate amount of time in trying to figure out what is wrong. I would hasten to note that the final Apple support member I worked with, Derek, was amazing in identifying what the issue was, communicating the challenges facing Apple internally, and taking ownership of the problem: Derek rocks. Apple support needs more people like him.

But, in the absence of being able to hire more Dereks, Apple needs better scripts to help their support staff assist users. And, moreover, the fact that Apple lacks a large enough engineering team to also have some people working weekends to solve issues is stunning: yes, hiring is challenging and expensive, but Apple is one of the most profitable companies in the world. Their lack of a true 24/7 support staff is absurd.

What’s the solution if you ever find yourself in this situation, then? Make sure that you’ve done what you can with your new domain settings and, then, just sit back and wait while Apple tries to figure stuff out. I don’t know how, exactly, Apple fixed this problem on their end, though when it is fixed you’ll get an immediate prompt on your iOS devices that you need to update your custom domain information. It’s quick to take the information provided (which will include a new DKIM record that is unique to your new domain) and then get Apple custom iCloud email working with whomever is managing your DNS records.

Ultimately, I’m glad this was fixed for me but, simultaneously, the ability of most of Apple’s support team to provide assistance was minimal. And it meant that for 3-4 days I was entirely without my primary email address, during a busy work period. I’m very, very disappointed in how this was handled irrespective of things ultimately working once again. At a minimum, Apple needs to update its internal scripts so that their frontline staff know the right questions to ask (e.g., did you change information about your website’s DNS information?) to get stuff moving in the right direction.

Categories
Links Writing

Vulnerability Exploitability eXchange (VEX)

CISA has a neat bit of work they recently published, entitled “Vulnerability Exploitability eXchange (VEX) – Status Justifications” (warning: opens a .pdf).1 Product security teams that adopt VEX could assert the status of specific vulnerabilities in their products. As a result, clients’ security staff could allocate time to remediating actionable vulnerabilities instead of burning time on potential vulnerabilities that product security teams have already closed off or mitigated.

There are a number of different machine-readable status types that are envisioned, including:

  • Component_not_present
  • Vulnerable_code_not_present
  • Vulnerable_code_cannot_be_controlled_by_adversary
  • Vulnerable_code_not_in_execute_path
  • Inline_mitigations_already_exist

CISA’s publication spells out what each status entails in more depth and includes diagrams to help readers understand what is envisioned. However, those same readers need to pay attention to a key caveat, namely, “[t]his document will not address chained attacks involving future or unknown risks as it will be considered out of scope.” Put another way, VEX is used to assess known vulnerabilities and attacks. It should not be relied upon to predict potential threats based on not-yet-public attacks or on new ways of chaining known vulnerabilities. Thus, while it would be useful to ascertain whether a product is vulnerable to EternalBlue today, it would not be useful for predicting or assessing the exploited vulnerabilities prior to EternalBlue having been made public, nor new or novel ways of exploiting the vulnerabilities underlying EternalBlue. In effect, then, VEX is meant to address the known risks associated with N-Days as opposed to risks linked with 0-Days or novel ways of exploiting N-Days.2
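To give a sense of how these statuses surface in practice, a machine-readable VEX statement might look along the following lines. This is a hypothetical, heavily abbreviated sketch loosely modelled on the CSAF VEX profile; the document metadata and product identifier (`CSAFPID-0001`) are placeholders, and real implementations carry considerably more required fields.

```json
{
  "document": {
    "category": "csaf_vex",
    "title": "Illustrative VEX statement for ExampleProduct 1.0"
  },
  "vulnerabilities": [
    {
      "cve": "CVE-2017-0144",
      "product_status": {
        "known_not_affected": ["CSAFPID-0001"]
      },
      "flags": [
        {
          "label": "vulnerable_code_not_present",
          "product_ids": ["CSAFPID-0001"]
        }
      ]
    }
  ]
}
```

A consuming tool could filter out every vulnerability whose products are all marked `known_not_affected`, leaving only actionable findings for a client’s security staff to triage.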

For VEX to work best there should be some kind of surrounding policy requirement, such that when/if a supplier falsely (as opposed to incorrectly) asserts the security properties of its product there is some disciplinary response. This can take many forms, and perhaps the easiest relies on economics rather than criminal sanction: federal governments or major companies could decline to do business with a vendor found to have issued a deceptive VEX, and may have financial recourse based on contractual terms with the product’s vendor. When or if this economic solution fails, then it might be time to turn to legal venues and, if existing approaches prove insufficient, potentially even introduce new legislation designed to further discipline bad actors. However, as should be apparent, there isn’t a demonstrable requirement to introduce legislation to make VEX actionable.

I think that VEX continues the current American administration’s work to advance a number of good policies that are meant to better secure products and systems. VEX works hand-in-hand with SBOMs and, also, may be supported by US Executive Orders around cybersecurity.

While Canada may be ‘behind’ the United States we can see that things are potentially shifting. There is currently a consultation underway to regenerate Canada’s cybersecurity strategy and infrastructure security legislation was introduced just prior to Parliament rising for its summer break. Perhaps, in a year’s time, we’ll see stronger and bolder efforts by the Canadian government to enhance infrastructure security with some small element of that recommending the adoption of VEXes. At the very least the government won’t be able to say they lack the legislative tools or strategic direction to do so.


  1. You can access a locally hosted version if the CISA link fails. ↩︎
  2. For a nice discussion of why N-Days are regularly more dangerous than 0-Days, see: “N-Days: The Overlooked Cyber Threat for Utilities.” ↩︎
Categories
Photography Writing

Thoughts on Developing My Street Photography

(Dead Ends by Christopher Parsons)

For the past several years I’ve created a ‘best of’ album that collects the best photos I made that year. I use the yearly album to assess how my photography has changed and what, if any, qualities are common across those images. The process of making these albums and then printing them forces me to look at my images, consider how they work against one another, and better understand what I learned over the course of taking photos for a year.

I have lots of favourite photographs but what I’ve learned the most, at least over the past few years, is to ignore a lot of the information and ‘tips’ that are often shared about street photography. Note that the reason to ignore them is not because they are wrong per se, or that photographers shouldn’t adopt them, but because they don’t work for how I prefer to engage in street photography.

I Don’t Do ‘Stealth’ Photography

Probably the key tip that I generally set to the side is that you should be stealthy, sneaky, or otherwise hidden from your subjects. It’s pretty common for me to see a scene and wait with my camera to my eye until the right subjects enter the scene and are positioned where I want them in my frame. Sometimes that means people will avoid me and the scene, and other times they’ll clearly indicate that they don’t want to have their photo taken. In these cases the subject is communicating their preferences quite clearly and I won’t take their photograph. It’s just an ethical line I don’t want to cross.

(Winter Troop by Christopher Parsons)

In yet other instances, my subjects will be looking right at me as they pass through the scene. They’re often somewhat curious. And in many situations they stop and ask me what I’m taking photos of, and then a short conversation follows. In an odd handful of situations they’ve asked me to send along an image I captured of them or a link to my photos; to date, I’ve had pretty few ‘bad’ encounters while shooting on the streets.

I Don’t Imitate Others

I’ve spent a lot of time learning about classic photographers over the past couple years. I’ve been particularly drawn to black and white street photography, in part because I think it often has a timeless character and because it forces me to more carefully think about positioning a subject so they stand out.

(Working Man by Christopher Parsons)

This being said, I don’t think that I’m directly imitating anyone else. I shoot with a set of focal ranges and periodically mix up the device I’m capturing images on; last year, a bulk of my favourite photos came from an intensive two week photography vacation where I forced myself to walk extensively and just use an iPhone 12 Pro. Photos that I’m taking, this year, have largely been with a Fuji X100F and some custom jpg recipes that generally produce results that I appreciate.

Don’t get me wrong: in seeing some of the photos of the greats (and the less great and less well-known) I draw inspiration from the kinds of images they make, but I don’t think I’ve ever gone out to try and make images like theirs. This differs from when I started taking shots in my city, and when I wanted to make images that looked similar to the ‘popular’ shots I was seeing. I still appreciate those images but they’re not what I want to make these days.

I Create For Myself

While I don’t think that I’m alone in this, the images that I make are principally for myself. I share some of those images but, really, I just want to get out and walk through my environment. I find the process of slowing down to look for instances of interest and beauty helps ground me.

Because I tend to walk within the same 10-15km radius of my home, I have a pretty good sense of how neighbourhoods are changing. I can see my city changing on a week to week basis, and feel more in tune with what’s really happening based on my observations. My photography makes me very present in my surroundings.

(Dark Sides by Christopher Parsons)

I also tend to use my walks to both cover new ground and, also, go into back alleys, behind sheds, and generally in the corners of the city that are less apparent unless you’re looking for them. Much of the time there’s nothing particularly interesting to photograph in those spaces. But, sometimes, something novel or unique emerges.

Change Is Normal

For the past year or so, a large volume (95% or more) of my images have been black and white. That hasn’t always been the case! But I decided I wanted to lean into this mode of capturing images to develop a particular set of skills and get used to seeing—and visualizing—scenes and subjects monochromatically.

But my focus on black and white images, as well as images that predominantly include human subjects, is relatively new: if I look at my images from just a few years ago there was a lot of colour and stark, or empty, cityscapes. I don’t dislike those images and, in fact, several remain amongst my favourite images I’ve made to date. But I also don’t want to be constrained by one way of looking at the world. The world is too multifaceted, and there are too many ways of imagining it, to be stuck permanently in one way of capturing it.

(Alley Figures by Christopher Parsons)

This said, over time, I’d like to imagine I might develop a way of seeing the world and capturing images that provides a common visual language across my images. Though if that never happens I’m ok with that, so long as the very practice of photography continues to provide the dividends of better understanding my surroundings and feeling in tune with wherever I’m living at the time.