What the U.S. election taught us about technology, and our attempts to regulate it

 

12 November 2020 (Chania, Crete) – There is a common argument that representative democracy has survived the rise and fall of empires, but that many digital technologies do not adhere to the laws, systems or norms that traditionally underpin representative democracies. For a democracy to work reasonably well, citizens need the ability to work out what they really think about things, to reflect carefully on complex decisions, and to engage constructively and meaningfully with each other to find compromises.

That’s an idealistic vision of how it should work, but the argument continues that digital technology does not help us achieve it at all, because it is all based on immediate, emotional, aggressive, snappy, short messaging. The whole “logic” of how we now communicate is not the one a representative democracy is built on, and technology is increasingly causing a disconnect between our traditional democratic system and our modern lifestyles. And so, “technology must be regulated!!”

The narrative used to be so simple. Technology would allow individuals to be healthier, wealthier, more tightly connected to one another, better informed and more able to communicate, share information and collaborate. It was a promise of greater equality, greater efficiency, greater speed and greater creativity – all enabled by technologies like three-dimensional printers, ubiquitous computing and advances like virtual reality.

But things changed. We were thrown into a pessimistic future: a tiered system of information haves and have-nots; a growth in new and better technologies of global surveillance; a system where rights appear to belong to information and those who collect it rather than to individuals, amidst wider projects of ad hoc geo-engineering and the global diffusion of advanced weapons and other forms of technologically enabled instability. 

It was right there in our faces: technological development (though not all of it) catering to consumer demand in increasingly sophisticated ways. We never examined (or maybe just ignored) the societal consequences or the potential downsides. We saw tech solely as consumers, rather than as citizens, when we should have been applying the same civic scrutiny that we would bring to any other form of power.

Technology drives change. And, by definition, change can turn the world upside down. And so, in the end, technology, as a bringer of change, is about politics. Because politics is about who gets what, from whom, under what conditions, and for what purpose. It is why in my monograph on regulating technology in the modern world I say the regulatory state must be examined through the lens of the reconstruction of the political economy: the ongoing shift from an industrial mode of development to an informational one.

To those who know history this is no revelation. Every time we’ve had a technological change we’ve had both social and political change. Karl Marx was but one of many who have pointed out that the West’s socio-economic system, and therefore our politics, is determined by and derived from the mode of industry, the mode of production. That was the case with the Industrial Revolution, when we replaced muscle power with artificial power. And it is the case with the informational revolution. It is why we hear the ever louder cry of “WE MUST REGULATE TECH!!”

All of this was at the forefront of the recent U.S. election (and has in fact accelerated over the last five years). This election period offers many lessons about technology, and about its regulation. Some are very obvious and easy, and some more complex. I cannot review all of it, but let’s take a look at what I think were the main take-aways.

Let’s start with the easy ones first.

 

California voters can enact statutes directly through ballot measures. According to Loyola Law School in Los Angeles, no state uses them more than California. There were two major technology issues on the ballot this year.

Privacy. California voters passed a new online privacy law as a ballot initiative, strengthening the California Consumer Privacy Act (CCPA) passed by the legislature in 2018, tweaking a few rules and bringing it closer to the EU’s General Data Protection Regulation (GDPR). It also sets up a dedicated privacy enforcement agency, which California (and the U.S.) previously lacked. Two observations:

• Online tracking and privacy are real problems, but the CCPA, like the GDPR, is vague and hard to comply with (and it’s hard to know whether you’re complying), conflates real issues with basic parts of how the web works, and tends to help Google and Facebook at the expense of small publishers. It does not come close to what is needed (or what was “sold” to legislators). But, no worries for you data protection lawyers and data protection vendors: compliance with the CCPA is estimated to cost U.S. companies $55 billion. So go out there and sell, sell, sell your “solutions”.

• Extra-territoriality: as with the GDPR, most publishers will have to follow these rules everywhere (and certainly across the whole U.S.), and there is a push towards complying with the “highest common denominator” (the inverse of normal regulatory arbitrage). California is acting as the U.S. regulator by default. Oh, there are a number of loopholes that companies can wiggle through, but we’ll discuss those in another post.

The Los Angeles Times has a pretty good summary of the ballot change here, and you can read the text of the law changes here.

On-demand drivers. Last year California’s legislature passed “AB5”, a law that aimed to class Uber & Lyft drivers as employees but was so ineptly drafted that instead it effectively banned all freelance work. Yes, really. There was lots of criticism. You can read one story about it here. That was followed by a wave of amendments to carve out everything except Uber and Lyft, plus a flurry of court cases.

Now voters have passed a new law by ballot initiative that throws the whole thing out, bans the state government from doing this again, and instead sets a range of regulations for on-demand driving, with minimum wages, defined operating practices and mandated health benefits, amongst other things. This is locked in place with a clause that any amendments by the state legislature require a 7/8 majority (the privacy law noted above, passed at the same time, only needs a simple majority for changes). This is a slap in the face for the politicians who caused the mess (and a massive own-goal for the people behind AB5), but baking in regulations like this will not end well either.

The gig economy extends well beyond ride-sharing and food delivery to encompass a vast array of professional services; however, a standard definition of the gig economy does not yet exist. As a result, gig workers go by a plethora of names, with various distinctions drawn between full-time and part-time workers.

Earlier today I wrote a separate post that gets into the details of what this ballot decision means – the ripple effects and the brutal truths being ignored – which you can read here. For the text of the law, click here.

 

Misinformation, Disinformation and Malinformation 

 

While propaganda and the manipulation of the public via falsehoods are tactics as old as the human race, we have been numbed by the speed, reach and low cost of online communication that has propelled misinformation, disinformation and malinformation to exponential levels … plus the continuously emerging innovations that magnify MisDisMal to ever higher threat levels. We have finally realized it is nearly impossible to implement solutions at scale – the attack surface is too large to be defended successfully. The trustworthiness of our information environment will decrease over the next decade because:

1. It is inexpensive and easy for bad actors to act badly;

2. Potential technical solutions based on strong ID and public voting (for example) won’t quite solve the problem; and

3. Real solutions based on actual trusted relationships will take time to evolve – likely a decade or more, if they evolve at all.

This makes all of us … all of us … vulnerable to accepting and acting on misinformation.

NOTE: for this post, I will stick to general themes. But in a later post I will jump into the complex tech and processes involved here and do a deeper dive.

For instance, researchers at Carnegie Mellon University, MIT Media Lab and Serico analyzed ~300 million tweets sent since April 2020 and found that about 31% came from accounts that behave more like computerized robots than humans. There is some tricky methodology involved in this kind of analysis, and the bottom line is that identifying such activity is hardly binary. The various tools, while indicative, should not be treated as a single source of truth. I’ve seen many cases where accounts that tweet more than humanly possible and appear to come from different countries turned out to be clearly run by people who care deeply about a particular issue. And media coverage (unsurprisingly) blurs the line between actual “bots” (automated accounts), inauthentic accounts (accounts purporting to represent real humans but not doing so), and the accounts of real activists, who often engage well outside expected norms. I have been fortunate because my personal contacts and my LinkedIn community have helped me to see how the tubes, wires, plumbing and connections work in manipulating social media … and to understand the amount of power these private players have amassed.
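To make the “hardly binary” point concrete, here is a minimal sketch of the kind of rate-based heuristic such studies lean on. It is not the CMU/MIT methodology; every field name and threshold below is hypothetical. The point is that the output is a score built from weak signals, and a dedicated human activist can trip it just as easily as a script.

```python
# A minimal sketch of a rate-based "bot-likeness" heuristic. This is NOT the
# CMU/MIT methodology; the field names and thresholds are purely illustrative.
# The output is a score built from weak signals, not a yes/no verdict.
from dataclasses import dataclass
from datetime import datetime
from typing import List


@dataclass
class Account:
    handle: str
    created_at: datetime
    tweet_timestamps: List[datetime]  # one entry per observed tweet


def bot_likeness_score(acct: Account, now: datetime) -> float:
    """Return a 0..1 score from a few crude signals; higher means more bot-like."""
    days_active = max((now - acct.created_at).days, 1)
    tweets_per_day = len(acct.tweet_timestamps) / days_active

    # Signal 1: posting volume beyond what most humans sustain
    # (144/day is one tweet every 10 minutes, around the clock).
    volume = min(tweets_per_day / 144.0, 1.0)

    # Signal 2: activity in every hour of the day (humans usually sleep).
    hours_seen = {ts.hour for ts in acct.tweet_timestamps}
    always_on = len(hours_seen) / 24.0

    # A naive average of weak signals. A dedicated human activist can score
    # high on both, which is exactly why this is indicative, not proof.
    return 0.5 * volume + 0.5 * always_on
```

Serious analyses combine many more signals plus manual review, which is why the headline percentages should be read as estimates rather than hard counts.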

Content moderation is hard. It’s not that Facebook and Twitter did not try this election period – I will discuss their efforts in detail below. But first …

 

 

Every time there’s another big new story about harmful content on a social platform and another flurry of questioning why that platform didn’t find it and stop it, I think back to viruses and malware, and how virus scanners and security software were not the answer. Harmful content generated by users, even fake users, is inherently scalable, not because it’s harmful but because it’s content. It’s people + the internet. Scanning, filtering and moderating harmful content is inherently unscalable and unrepeatable. The internet removes all gatekeepers, filters and moderating influences, and it lets any two people on Earth with the same idea find each other and tell themselves they’re right. That’s not a bug or a feature. It’s what the internet is.

The idea that you can “solve” this by beating up Facebook or with more moderation and software is like all those 19th-century monarchies that thought they could fend off political change by closing a university every now and then and hiring more secret police. The solution to the malware explosion of 20 years ago was not more scanning, or shouting at Microsoft to fix it, but changing the software model. We went to sandboxes – to iOS and Chrome OS – and we went to the cloud. Admittedly, this issue is much trickier because some seriously bad DNA is in the bucket.

The stupid (and far too widespread) criticism of Facebook is that solving this is much easier than Facebook says. The worrying criticism, which is right, is that it is much harder than Facebook thinks. The striking thing about all these stories is not that Facebook isn’t on top of fake news in the U.S. or in India or in Honduras or in Burma (also called Myanmar, the name the military authorities adopted in 1989, though many countries refused to adopt the change) or [choose your country], but rather the complete insanity of one company in Menlo Park, CA even trying to be on top of every level of political discourse in every country on Earth.

NOTE: It is similar to these insane conversations I have with “experts” who think data privacy is actually possible. Data is in perpetual motion, swirling around us, only sometimes touching down long enough for us to make any sense of it. We use data, these numbers, as signs to navigate the world, relying on them to tell us where traffic is worst, what things cost, and what our friends are thinking and doing – and to place an Amazon order and let Amazon “logistic” the delivery of the package, etc., etc.

And because we do this, data has become a crucial part of our infrastructure, enabling commercial and social interactions. Rather than just tell us about the world, data acts in the world. Because data is both representation and infrastructure, sign and system. As media theorist Wendy Chun puts it, data “puts in place the world it discovers”. Restrict it? Protect it? We live in a massively intermediated, platform-based environment, with endless network effects, commercial layers, and inference data points.

For those of you who followed my series last year on the Appboy and Disconnect data trackers, you know that each of us who uses an Apple or Android phone (and even your laptop) will engage with 5,400+ trackers, mostly embedded in apps, most feeding data brokers. According to the privacy firm Disconnect, you spew out an average of 1.5 gigabytes of data about yourself over the span of a month. But you need special software to see any of this. It all feeds into the big GDPR problem over “transparency”, and why EU citizens are struggling with the “Data Privacy and Data Subject Access Requests” mandated under the GDPR: if you don’t know where your data is going, how can you ever hope to keep it private?
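For a sense of what that “special software” actually does, here is a rough conceptual sketch: capture a device’s outbound requests (via a proxy or on-device VPN) and tally the third-party hosts against a list of known tracker domains. This is not Disconnect’s implementation; the blocklist and the sample log are invented for the example.

```python
# A rough illustration of what tracker-auditing tools do conceptually: tally
# third-party request hosts against a list of known tracker domains. This is
# NOT Disconnect's implementation; the blocklist and the sample log below are
# invented for the example.
from collections import Counter
from urllib.parse import urlparse

KNOWN_TRACKERS = {"doubleclick.net", "graph.facebook.com", "app.appboy.com"}  # tiny hypothetical subset


def tracker_summary(request_urls):
    """Count how often each known tracker domain appears in a device's outbound requests."""
    hits = Counter()
    for url in request_urls:
        host = urlparse(url).hostname or ""
        for tracker in KNOWN_TRACKERS:
            if host == tracker or host.endswith("." + tracker):
                hits[tracker] += 1
    return hits


# In reality you need a proxy or on-device VPN to capture this traffic at all,
# hence "you need special software".
sample = [
    "https://app.appboy.com/api/v3/data",
    "https://ads.doubleclick.net/pixel?id=123",
    "https://example.com/index.html",
]
print(tracker_summary(sample))  # two of the three requests go to trackers
```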

But I digress …

Another dumb response to content moderation/regulation is to say we should break up Facebook. Okay, so now what social media/communication app will people in Burma use (and they will use something), and how are you going to manage/police THAT? You haven’t solved the problem at all. It’s not as though any other platform does this any better. This problem is not solvable. What are your variables? Money, humanity, power, control, greed, and fear (primarily the fear of losing power). Facebook simply highlights all the issues and allows us to measure them via data analytics. Yes, it failed to fix this issue. But, kind reader, how do you cure corruption? Cure politics? Revenue models? Without defeating the purpose of the internet? Ain’t so easy, is it?

These media platforms are neither telcos (with no control over what we say) nor newspapers or TV (with total control over what they say), but something else, in between, and we have neither a settled understanding of what that means nor a consensus on how we should manage it. As for all those problems that were front of mind in the last few weeks: it does seem that Facebook and Twitter (and to a lesser extent YouTube) were (mostly) on top of political misinformation, and so far we haven’t had another big Cambridge Analytica-style scandal (even if that turned out to be a scam). But it feels like Twitter will get a lot more credit than Facebook for aggressive labelling, and Facebook doesn’t seem to have much goodwill left.

What will happen, though? Setting aside Republican posturing about platform bias, reasonable people can believe that the “Section 230” liability environment for platforms should change, with more requirements for companies of any kind to control some kinds of content, but is there a path to doing that in the U.S. political system? (Meanwhile, of course, other countries will create their own liability rules, with implications for U.S. companies regardless of U.S. politics). This will also link to bias and explainability … “why were you suggested that post? Should you be suggested something else? Why were you suggested better jobs to apply for than me? Is LinkedIn sure it has no algorithmic bias?”  There are a lot of threads to be pulled on here, and pulled together.

The other side of this, of course, is antitrust. There is clearly a school of thought that U.S. companies in general, and tech in particular, are over-concentrated, and that the U.S. government must rethink this … as if the current political framework in the U.S. were capable of actually executing policy. It is tempting but hard to make an antitrust case against Amazon, easy to make one against Google and Facebook in online advertising (which is relevant to newspapers, who will push it), and fun to argue about Apple’s App Store. But you can’t possibly do all of this in one court case (it’s more like a dozen-plus court cases), and breaking up natural monopolies often makes little sense (which I will cover in my monograph) because, as I have written, many of the “Big Tech” problems people in the street worry about aren’t really competition problems at all.

Regulation is easy; regulation that does what you want is hard. And of course most of this actually conflicts with wanting more competition: regulations favor incumbents, moderation is expensive and privacy rules make it harder to switch services. This is how real policy works.

Finally … we spent the last four years blaming Facebook in particular and tech in general for Trump being in office, and now he isn’t. I wonder how that changes some of the conversation, and I also wonder how much removing Trump from office changes the tone of discourse in tech (and indeed the U.S.) more generally. All conversations to date have seemed more fraught with him yellowing the pool of public discourse. How long after he leaves office will Twitter ban his account?

 

That gust of wind you felt coming from Silicon Valley last week was the social media industry’s tentative sigh of relief. For the last four years, executives at Facebook, Twitter, YouTube and other social media companies have been obsessed with a single, overarching goal: to avoid being blamed for wrecking the 2020 U.S. election, as they were in 2016, when Russian trolls and disinformation peddlers ran roughshod over their defenses. So they wrote new rules. They built new products and hired new people. They conducted elaborate tabletop drills to plan for every possible election outcome. And on Election Day, they charged huge, around-the-clock teams with batting down hoaxes and false claims.

It appears those efforts averted the worst. Despite the frantic (and utterly predictable) attempts from Trump and his allies to undermine the legitimacy of the vote in the states where he is losing, there were no major foreign interference campaigns unearthed, and Election Day itself was relatively quiet.

NOTE: this seems to have been confirmed by the intelligence community via sites such as CyberBrief and DefenseOne which are plugged into the IC.

Fake accounts and potentially dangerous groups were taken down fairly quickly, and Facebook and Twitter were unusually proactive about slapping labels and warnings in front of premature claims of victory.

NOTE: YouTube was a different story, as I note below. The most common complaint was its slow, tepid response to a video that falsely claimed that Trump had won the election.

But … well, there were still plenty of problems. Election-related disinformation started trending up pretty quickly over the past weekend, and then spiked this week as votes were challenged in the courts and conspiracy theorists capitalized on all the uncertainty to undermine confidence in the eventual results. As one Facebook executive noted, “the attack surface and environment are huge”. Well, yes. See my comments in the section immediately above.

All of this sheds light on the very real problems these platforms still face. For months, nearly every step these companies have taken to safeguard the election has involved slowing down, shutting off or otherwise hampering core parts of their products – in effect, defending democracy by making their apps worse. For example, Facebook did the following:

* It added friction to processes, like political ad-buying, that had previously been smooth and seamless.

* It brought in human experts to root out extremist groups and manually intervened to slow the spread of sketchy stories.

* It overrode its own algorithms to insert information from trusted experts into users’ feeds.

* As results came in, it relied on the calls made by news organizations like The Associated Press, rather than trusting that its systems would naturally bring the truth to the surface.

Quite a shift for a company that for years envisioned itself as a kind of “post-human” communication platform. Zuckerberg always speaks about his philosophy of “frictionless” design – making things as easy as possible for users. Earlier this year, during the U.S. Congressional hearings, we heard that “Facebook would become a kind of self-policing machine, with artificial intelligence doing most of the dirty work and humans intervening as little as possible”.

Well, not quite. Facebook realized it had to go in the opposite direction. As several media blogs put it, Facebook realized its systems sucked and it had to do more. So in the weeks leading up to the election, in addition to the points I noted above, we saw Facebook:

* Throttle false claims, and put in place a “virality circuit-breaker” to give fact-checkers time to evaluate suspicious stories (see the sketch after this list).

* Temporarily shut off its recommendation algorithm for certain types of private groups, to lessen the possibility of violent unrest.

* As the surge of MisDisMal began to overwhelm it, take other temporary measures to tamp down election-related misinformation, including adding more friction to the process of members sharing posts.
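Facebook has not published how its “virality circuit-breaker” works, so the sketch below is only an illustration of the general idea as reported: when a post’s share velocity spikes before fact-checkers have looked at it, pause algorithmic amplification for a while rather than deleting anything. Every class name and threshold here is hypothetical.

```python
# Facebook has not published how its "virality circuit-breaker" works; this is
# only a sketch of the general idea as reported: when a post's share velocity
# spikes before fact-checkers have reviewed it, pause algorithmic amplification
# for a while instead of deleting anything. All names and thresholds are hypothetical.
import time


class ViralityCircuitBreaker:
    def __init__(self, shares_per_hour_limit=5000, cooloff_seconds=6 * 3600):
        self.limit = shares_per_hour_limit
        self.cooloff = cooloff_seconds
        self.paused_until = {}  # post_id -> unix time when amplification may resume

    def record_share_rate(self, post_id, shares_last_hour, fact_checked=False):
        """Trip the breaker for fast-spreading posts that have not yet been reviewed."""
        if not fact_checked and shares_last_hour > self.limit:
            self.paused_until[post_id] = time.time() + self.cooloff

    def may_amplify(self, post_id):
        """Feed-ranking code would consult this before boosting a post further."""
        return time.time() >= self.paused_until.get(post_id, 0)
```

The point of the design is that nothing gets deleted; the post simply stops being accelerated while humans catch up, which is exactly the kind of added friction described above.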

Yes, all of these changes may, in fact, have made Facebook safer. But they also involve dialing back the very features that have powered the platform’s growth for years. As Kevin Roose, the social media analyst, put it:

“This was a very telling act of self-awareness, as if Ferrari had realized that it could only stop its cars from crashing by replacing the engines with go-kart motors. If you look at Facebook’s election response, it was essentially to point a lot of traffic and attention to these ‘human hubs’ that were curated by real people! That’s an indication that ultimately, when you have information that’s really important, there’s no substitute for human judgment. But given the day-to-day volumes on Facebook, that is not sustainable without a vast army.”

Twitter, another platform that for years tried to make communication as frictionless as possible, spent much of the past four years trying to pump the brakes. It brought in more moderators, revamped its rules, and put more human oversight on features like “Trending Topics”. In the months leading up to the election, it banned political ads, and disabled sharing features on tweets containing misleading information about election results, including some from Trump’s account. Twitter put many of Trump’s tweets behind a warning label.

Camille François, the chief innovation officer of Graphika, a firm that investigates disinformation on social media, said it was too early to say whether these companies’ precautions had worked as intended, and she suspects this level of hypervigilance will not last: “There were a lot of emergency processes put in place at the platforms. The sustainability and the scalability of those processes is a fair question to ask.”

YouTube didn’t act nearly as aggressively this week, but it has also changed its platform in revealing ways. Last year, it tweaked its vaunted recommendation algorithm to slow the spread of so-called borderline content. And it started promoting ‘authoritative sources’ during breaking news events, to prevent cranks and conspiracy theorists from filling up the search results.

All of this raises the critical question of what, exactly, will happen once the election is over and the spotlight has swiveled away from Silicon Valley. Will the warning labels and circuit-breakers be retired? Will the troublesome algorithms get turned back on? Do we just revert to social media as normal?

 

And while the social media companies may have gotten through election night without a disaster, the real fights will continue – and there are huge ones ahead. Facebook and Google had planned to lift their bans on political advertising one week after the election concluded. But yesterday both platforms extended their bans for the next few weeks. Their press releases were much the same and, more or less, said “allegations of voter fraud continue to circulate on social media despite state election officials from both parties finding no evidence to support those claims, so we are taking precautions”. It’s a wise move, in a way, because it’s an attempt to show they do not need regulation, that they have this covered. Self-regulation can work.

But I suspect they want to keep the focus on ads because people don’t like them, and it’s way harder to do anything about the real issue afflicting social media – the spread of misinformation in unpaid, organic posts.

The argument against this blanket political advertising ban? It comes ahead of two very important Senate runoff elections in Georgia, and it will make it difficult for candidates of either party to rally support online. I am in agreement with Republican digital strategist Eric Wilson (whom I follow on Twitter to get the “other side” on numerous issues): this is like using a rusty ax for something that deserves a scalpel.

And when Facebook was attacked from all sides over its decision, it came out with a three-paragraph rebuttal which really did not help, and which included this statement:

We do not have the technical ability in the short term to enable political ads by state or by advertiser, and we are also committed to giving political advertisers equal access to our tools and services.

I have a really hard time seeing one of the world’s most valuable companies, which prides itself on engineering talent and made $8 billion in profit last quarter, say it “does not have the technical ability” to do something. Yes, this stuff is extremely difficult, but it really needed more than a three-paragraph rebuttal. What was called for was one of those famously detailed Facebook “white papers”, like the ones it publishes to laud the technology behind its latest products – but applied to content moderation: the Possible, the Impossible, the Vortex, etc.

Or this statement: “only 6% of content on Facebook is political”. If you’re wondering what “political” means over at Facebook, no definition was offered.

NOTE: if you want a deep dive into this, especially the politics of platforms, the politics of categories and the power to define I highly recommend “Making Up Political People: How Social Media Create the Ideals, Definitions, and Probabilities of Political Speech”. It’s a long read but worth your time.

So the arguments will rage on over how these companies will respond to other threats, and over why they can’t “tweak” their systems for other issues as they did these last few weeks: “If you did this for U.S. elections, why not other countries’ elections? Why not climate change? Why not acts of violence?”

It ain’t going to be easy. As I noted in the section above, the ever-increasing speed and reach and ever-falling cost of online communication are taking misinformation, disinformation and malinformation to exponential levels, making the attack surface even larger and even more intense.

 

 

 

When you dive into an analysis or discussion about reducing false and misleading narratives online, trying to forecast improvements in technological fixes and in societal solutions, your hopes will be dashed, because there are enormous overarching and competing themes at work here. Those of us who do not think things will improve know that humans mostly shape technology advances to their own, not-fully-noble purposes and that bad actors with bad motives will thwart the best efforts of technology innovators to remedy today’s problems. The most-read essay in a series on this issue in The Atlantic was Yuval Noah Harari’s “Why Technology Favors Tyranny”.

These overarching and competing themes will be more fully discussed in my monograph but I’ll end this post with a few brief points. I am on the pessimistic, cynical side of this argument. My primary points are simple.

The fake news ecosystem preys on some of our deepest human instincts: Humans’ primal quest for success and power – their “survival” instinct – will continue to degrade the online information environment in the next decade. Manipulative actors will use new digital tools to take advantage of humans’ inbred preference for comfort and convenience and their craving for the answers they find in reinforcing echo chambers.

Our brains are not wired to contend with the pace of technological change: The rising speed, reach and efficiencies of the internet and emerging online applications will magnify these human tendencies, and technology-based solutions will not be able to overcome them. We’ll have a future information landscape in which fake information crowds out reliable information. Widespread information scams and mass manipulation will cause broad swathes of the public to simply give up on being informed participants in civic life.

Yes, there are experts and pundits who take a more positive view and expect things to improve generally. I will include them in my monograph. They invert the reasoning I noted above:

Technology can help fix these problems: These more hopeful experts said the rising speed, reach and efficiencies of the internet, apps and platforms can be harnessed to rein in fake news and misinformation campaigns. Some predict better methods will arise to create and promote trusted, fact-based news sources.

It is also human nature to come together and fix problems: The hopeful experts take the view that people have always adapted to change and that this current wave of challenges will also be overcome. They note that misinformation and bad actors have always existed but have eventually been marginalized by smart people and processes. They expect well-meaning actors to work together to find ways to enhance the information environment. They also believe better information literacy among citizens will enable people to judge the veracity of the content they encounter and eventually raise the tone of discourse.

But I cannot agree with the optimists for many reasons. I’ll list just a few:

*Misinformation is not like a plumbing problem you fix. It is a social condition, like crime, that you must constantly monitor and adjust to. The information environment will not improve: the problem is human nature. The quality of information will not improve in the coming years, because technology can’t improve human nature all that much.

*The problem of misinformation will be amplified because the worst side of human nature is magnified by bad actors using advanced online tools at internet speed on a vast scale. Whatever changes platform companies make, and whatever innovations fact checkers and other journalists put in place, those who want to deceive will adapt to them. As far back as the era of radio and before, as Winston Churchill reportedly said, “A lie can go around the world before the truth gets its pants on.”

*Michael Oghia, an author, editor and journalist based in Europe, expects a worsening of the information environment due to five things that will only increase:

1. The spread of misinformation and hate

2. Inflammation, sociocultural conflict and violence

3. The breakdown of socially accepted/agreed-upon knowledge and what constitutes “fact”

4. The increasing digital divide between those subscribed to (and ultimately controlled by) misinformation and those who are ‘enlightened’ by information based on reason, logic, scientific inquiry and critical thinking.

5. Further divides between communities, so that as we are more connected we are farther apart

*There are too many players and interests who see online information as a uniquely powerful shaper of individual action and public opinion in ways that serve their economic or political interests (marketing, politics, education, scientific controversies, community identity and solidarity, behavioral ‘nudging,’ etc.). These very diverse players would likely oppose (or try to subvert) technological or policy interventions or other attempts to ensure the quality, and especially the disinterestedness, of information. Big political players have just learned how to play this game. I don’t think they will put much effort into eliminating it.

*To put it simply, the internet is the 21st century’s threat of a “nuclear winter”, and there’s no equivalent international framework for nonproliferation or disarmament. The public can grasp the destructive power of nuclear weapons in a way they will never understand the utterly corrosive power of the internet to civilized society, when there is no reliable mechanism for sorting out what people can believe to be true or false.

*Worse, getting back to “what is a fact”, more and more history is being written, rewritten and corrected, because more and more people have the ways and means to do so. Therefore there is ever more information that competes for attention, for credibility and for influence. The competition will complicate and intensify the search for veracity. Of course, many are less interested in veracity than in winning the competition.

*It comes down to motivation: There is no market for the truth. The public isn’t motivated to seek out verified, vetted information. They are happy hearing what confirms their views. And people can gain more by creating fake information (both monetarily and in notoriety) than they can by keeping it from occurring. As George Carlin said, “Never underestimate the power of stupid people in large groups”. Or even better, Kierkegaard: “People demand freedom of speech as a compensation for the freedom of thought which they seldom use”.

Concluding thought: At the end of the day, controlling “noise” is less a technological problem than a human problem, a problem of belief, of ideology, of education. We can’t machine-learn our way out of this disaster, which is actually a perfect storm of poor civics knowledge and poor information literacy.

More to come.
