SEC Suffers Significant Loss Against Binance: Why Is No One Talking About It?

By Kubs Lalchandani, Founding Partner, Lalchandani Simon PL

Late on Friday evening, June 28, 2024, buried within a 90-page opinion published amid the post-presidential debate fallout and the Trump immunity decision news cycle, a judge of the U.S. District Court for the District of Columbia, appointed by President Obama, delivered a critical blow to the SEC's expansive theory of crypto regulation. The judge rejected the notion that digital assets remain securities perpetually, stating:

"Insisting that an asset that was the subject of an alleged investment contract is itself a 'security' as it moves forward in commerce... marks a departure from the Howey framework that leaves the Court, the industry, and future buyers and sellers with no clear differentiating principle between tokens in the marketplace that are securities and tokens that aren’t. It is not a principle the Court feels comfortable endorsing or applying."

Despite its significant implications, this ruling has gone largely unnoticed in the media, primarily because most of the SEC's claims against Binance will continue; the remaining claims now focus on Binance's other offerings, such as staking and vault products. Because the SEC can still proceed on most of those non-secondary-market claims, the headlines have for the most part portrayed this as an SEC win. In fact, the court's complete rejection of the SEC's theory concerning secondary market sales of digital assets makes it one of the agency's most devastating losses.

This rejection of the SEC's embodiment theory is crucial for several reasons. First, it aligns with Judge Torres's understanding of digital assets, undercutting the SEC's ability to dismiss the Ripple Labs decision as an outlier. Second, it highlights the SEC's inconsistent approach, underscoring for the public, the Biden administration, and other judges the need for clear regulatory rules rather than broad, ambiguous claims.

Background: The SEC's case against Binance, CZ, and related entities involved several alleged securities law violations. The SEC argued that secondary sales of digital assets constituted securities transactions, based on the initial sales meeting the Howey test criteria. Judge Berman Jackson disagreed, particularly with the notion that secondary sales of BNB (Binance's corporate token) and other listed "crypto asset securities" could be considered securities.

Judge Berman Jackson adopted an analytical framework similar to Judge Torres's decision in Ripple Labs, recognizing the distinction between initial and secondary market sales:

"Whether a secondary market sale constitutes an offer or sale of an investment contract would depend on the totality of circumstances and the economic reality of that specific contract, transaction, or scheme."

In applying this test, Judge Berman Jackson criticized the SEC's blanket assumption that secondary market sales of BNB on Binance's platform were securities. Additionally, she dismissed the SEC's references to other crypto assets on Binance, noting that the issuers were not parties to the action and had not been given an opportunity to respond to the SEC's claims.

This ruling suggests a need for the SEC to rethink its approach to regulating crypto exchanges. The importance of this opinion to the crypto industry cannot be overstated:

1.     It is the first case to expressly reject the SEC’s “embodiment” theory on digital assets.

2.     As a ruling from a district court within the D.C. Circuit, it sets up a potential conflict with the Second Circuit, where the Coinbase action was filed and where the presiding judge largely agreed with the SEC's theory.

3.     It underscores the growing judicial criticism of the SEC's regulation-by-enforcement strategy, which has created a lack of clarity and consistency in the crypto regulatory landscape.

Social Media: Is Florida working the wrong angle?

By: Talia Boiangin, Esq., CIPP/US

Florida took a somewhat unpopular position recently when Governor DeSantis signed into law the Stop Social Media Censorship Act (the “Act”). The Act, effective July 1, 2021, regulates “Big Tech” by requiring social media platforms to disclose the methods they use to “shadow ban” and by preventing the platforms from deplatforming or shadow banning candidates for political office. Much of the Act will require an extensive overhaul of companies’ algorithms and policies, such as creating the ability for a person who has been banned to retain access to their content and material for sixty days. While that seems simple on paper, companies will have to rewrite their systems or build a sandbox that is entirely separate from their current platform. That is, they’ll have to design a way for a blocked user to access only his or her own content, without access to the rest of the platform. These rewrites will take considerable time, yet with less than two months’ notice to comply, companies will be forced to take shortcuts and potentially create vulnerabilities for users in the process. Even so, the Act has already been challenged in court and online as unconstitutional for attempting to force private entities to publish or broadcast someone else’s speech, particularly that of political candidates.

Once the Act comes into effect, companies are not entirely vulnerable; Section 230 of the Communications Decency Act is a federal law that shields online platforms from liability over their content moderation decisions. While Section 230 has arguably done more harm than good in areas like cyberbullying and nonconsensual pornography, it stands as one of the most persuasive arguments for companies that refuse to treat candidates differently simply because they filled out a form to run for office. Section 230 needs major reworking to reflect the state of the Internet today, since it was written at a time when few could predict the Internet’s influence over every aspect of an individual’s life. That issue, however, should not obscure the fact that the Florida social media act approaches the same problem with an entirely different perception of reality. Sure, social media platforms and the companies that own them need to be regulated, but shouldn’t the privacy of our most cherished personal information be at the forefront of this battle, rather than a political candidate’s access to potential voters? Shouldn’t we be working on a way to protect everyone online, and not just those who run for office or run a “journalistic enterprise”? The Act sprinkles in provisions for the everyday user of a social media platform, but its clearest focus is on those running for office who are muted for their views, which at times have proven sorely inaccurate and dangerous to our nation.

Perhaps one of the most invasive sections of the Act grants the Florida Department of Legal Affairs investigative powers to subpoena any algorithm used by a social media platform in connection with an alleged violation. These algorithms are typically protected by trade secret or confidentiality laws, which means the Department will face major pushback when issuing subpoenas.

But what exactly is this Act, and what types of companies should invest time and energy reviewing it? Here, I break down who the Act applies to and how it interacts with political candidates, journalistic enterprises, and consumers, despite its minimal application to the everyday consumer. In the age of cyberbullying, nonconsensual pornography, terrorism, and fake news, the Florida legislature decided that a political candidate’s agenda and fake news websites were more important than passing a privacy bill to protect an everyday consumer’s data online. Surely, if DeSantis is interested in “We the People,” he wouldn’t waste so much energy on “I the Candidate.”

A.   Who this Act Applies To

Social media platforms are under heavy fire with this Act. Those that qualify as a social media platform (e.g. Facebook, Twitter, and Instagram) are going to find themselves not only subject to this law, but subject to the jurisdiction of the state.  

The Act defines a “social media platform” as any “information service, system, internet search engine, or access software provider that:

1.     Provides or enables computer access by multiple users to a computer server, including an Internet platform or a social media site;

2.     Operates as a sole proprietorship, partnership, limited liability company, corporation, association, or other legal entity;

3.     Does business in the state; and

4.     Satisfies at least one of the following thresholds:

a.     Has annual gross revenues in excess of $100 million, as adjusted in January of each odd-numbered year to reflect any increase in the Consumer Price Index.

b.     Has at least 100 million monthly individual platform participants globally.

Interestingly enough, the Act purposely exempts any platform that owns and operates a theme park or entertainment complex. While Disney and Comcast are in the clear, other social media platforms will have to scramble to rewrite code, discuss disclosure policy revisions, and even consider fighting back against the provisions of the Act. A rough sketch of how these statutory thresholds might be checked appears below.
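As a purely illustrative exercise, the definition above can be read as a simple decision rule. The sketch below uses hypothetical field and function names; the dollar and user figures come from the Act's quoted thresholds, and the theme-park exemption is modeled as a single flag.

```python
# Purely illustrative sketch of the Act's "social media platform" thresholds.
# The dollar/user figures come from the Act's text quoted above; the function
# and field names are hypothetical and not drawn from any official source.
from dataclasses import dataclass

@dataclass
class Entity:
    annual_gross_revenue_usd: float
    monthly_global_users: int
    does_business_in_florida: bool
    owns_theme_park_or_entertainment_complex: bool  # the Act's exemption

def qualifies_as_social_media_platform(e: Entity) -> bool:
    """Rough reading of the statutory definition: an in-state business that
    meets at least one size threshold, unless the theme-park exemption applies."""
    if e.owns_theme_park_or_entertainment_complex:
        return False
    meets_size_threshold = (
        e.annual_gross_revenue_usd > 100_000_000
        or e.monthly_global_users >= 100_000_000
    )
    return e.does_business_in_florida and meets_size_threshold

# Example: a large platform vs. a theme-park owner of similar size.
print(qualifies_as_social_media_platform(
    Entity(2_000_000_000, 300_000_000, True, False)))   # True
print(qualifies_as_social_media_platform(
    Entity(2_000_000_000, 300_000_000, True, True)))    # False (exempt)
```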

A user can bring a private cause of action only if the social media platform fails to apply its standards in a consistent manner (more on that below) or if the user was deplatformed without being notified. Violations under this private cause of action can carry up to $100,000 in statutory damages per proven claim, actual damages, punitive damages if aggravating factors are present, other forms of equitable relief (including injunctive relief), and possibly costs and reasonable attorney fees depending on the circumstances.

And if attorneys for these companies were considering throwing a “lack of personal jurisdiction” argument, the Act takes care of that too: “a social media platform that censors, shadow bans, deplatforms, or applies post-prioritization algorithms to candidates and users in the state is conclusively presumed to be both engaged in substantial and not isolated activities within the state and operating, conducting, engaging in, or carrying on a business, and doing business in this state, and is therefore subject to the jurisdiction of the courts of the state.”

B.    Florida Political Candidates.

The Act provides special privileges to a “candidate,” which in Florida means a person who files qualification papers and subscribes to a candidate’s oath as required by law. According to the Act, a social media platform cannot deplatform a candidate for office, beginning on the date of qualification and ending on the date of the election or the date the candidate ceases to be a candidate. Further, the platform is prohibited from prioritizing, featuring, or limiting the exposure of content posted by a candidate, practices the Act defines as “post-prioritization” and “shadow banning.” Easier said than done: platforms don’t have simple on-off switches for the algorithms involved, and building one will require extensive engineering hours.

The Act also requires social media platforms to develop a method to identify a user as a qualified candidate through review of the Division of Elections or the website of the local supervisor of elections. In such a short amount of time, companies that qualify as social media platforms must either write an algorithm that combs the Florida Division of Elections website and similar sources, or hire individuals to comb those websites manually until such an algorithm exists. The Florida legislature has little understanding of tech companies’ implementation processes, but rest assured, creating a new feature does not happen overnight. Once developers create such capabilities (after many sleepless days), the code then has to pass through many layers of approval. If tech companies were so confident in how their code operates and interacts with consumers around the world, in-house counsel positions would not be so popular. With little to no time to comply, these companies face hefty fines from the Florida Elections Commission of $250,000 per day for violations involving a candidate for statewide office and $25,000 per day for candidates for other offices.
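To make the burden concrete, here is a minimal sketch of the kind of candidate-verification step the Act seems to demand. The Act references no API, so this assumes, purely hypothetically, that the platform has exported a roster of qualified candidates (for example, from the Division of Elections site) into a local CSV with name, qualification date, and election date columns.

```python
# Hypothetical sketch of a candidate-verification check; all file layouts and
# function names are assumptions, not anything specified by the Act.
import csv
from datetime import date

def load_qualified_candidates(roster_csv: str) -> dict[str, tuple[date, date]]:
    """Map a candidate's name to (qualification_date, election_date)."""
    roster = {}
    with open(roster_csv, newline="") as f:
        for row in csv.DictReader(f):
            roster[row["name"].strip().lower()] = (
                date.fromisoformat(row["qualification_date"]),
                date.fromisoformat(row["election_date"]),
            )
    return roster

def is_protected_candidate(user_name: str, roster: dict, today: date) -> bool:
    """The Act's protection runs from qualification until the election (or until
    the person ceases to be a candidate, which this sketch ignores)."""
    window = roster.get(user_name.strip().lower())
    return window is not None and window[0] <= today <= window[1]
```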

Less interestingly for an everyday consumer, but more important for the IRS, if the platform willfully provides free advertising for a candidate, it must inform the candidate of such in-kind contribution. Posts, content, material, and comments by candidates which are shown on the platform in the same or similar way as other users’ posts, content, material, and comments are not considered free advertising.

While Section 230 will be a strong argument if the State attempts to target a platform that banned a political candidate for any reason, companies will still have to work out ways to comply. It is also somewhat absurd that companies must undergo such a huge process to implement this online superpower for political candidates, when the candidates being protected can be as problematic as Laura Loomer, who ran for Congress in Florida after being banned from Facebook and other platforms for touting herself as a “proud Islamophobe.”

C.    Limitations on Standards, Proprietary Information, and Procedures.

Social media platforms will be required to publish the “standards, including detailed definitions, it uses or has used for determining how to censor, deplatform, and shadow ban.” This section is a gateway into the behind-the-scenes of platforms and how they operate. If that seemed like a stretch, the Act also grants the Florida Department of Legal Affairs the power to subpoena any algorithm used by a social media platform. While most social media platforms still rely on user reporting and human review to monitor content, they process massive volumes of information and therefore rely heavily on automated content moderation. The details of how those systems work are usually treated as confidential information and trade secrets. Social media platforms derive a majority of their revenue from selling targeted ads, which depend on similar automated classification systems: the algorithm used to flag hate speech may also help decide whether a user is more likely to react to a sponsored ad about dog collars or about cat litter. In addition, the algorithms used by social media platforms were trained on large datasets in order to classify material. Allowing access to this kind of information could violate consumers’ data privacy rights in certain states and countries and would expose many vulnerabilities for the platform and its users.

The Act also seeks to regulate how these platforms apply censorship, deplatforming, and shadow banning standards, requiring that they be applied in a “consistent manner.” While there are legitimate concerns that companies like Facebook do not do enough to protect people from cyberbullying and other harmful situations online, the Act assumes that companies currently have the capability to regulate their cyberspace in a “consistent manner.” For example, many of the algorithms already in place only work in certain languages, which makes it nearly impossible today to apply standards equally across countries and communities around the world. Machine learning systems are also far from perfect; they are trained on massive datasets but often struggle with complex material, and they generally lack the ability to consider context in determining whether content is problematic. While user reporting and human review remain integral parts of the moderation process, it is hard to decide whether to ban someone over a sarcastic inside joke with a friend, or a picture of a child who happens to be their niece. Even so, the Act does not explain what a “consistent manner” standard entails and leaves that interpretation to the State.

The Act also delays the process of shadow banning and deplatforming a user by requiring that the user be notified. Alarmingly, the now-notified and banned user must retain access to retrieve all of their information, content, material, and data for at least sixty days after being notified of the deplatforming. First, this defeats the purpose of banning, because a user deplatformed for a violation such as harassment will still have access to their content. Second, allowing a user access to a minuscule part of a social media platform is not as simple as turning off access to the rest. Platforms like Facebook, Twitter, and Instagram were designed to be integrated, so access to one’s Facebook page could be a huge loophole in a ban. Suppose a user is banned for posting graphic images on his page and tagging a targeted user, and that user also has many images in his photo album with comments that violate Facebook’s guidelines. Granting him access to just those photos is not a simple feature to build. Developers would have to design a way for a banned user’s account to be blacklisted from the site, but only in certain respects. Such a feature will most likely require a sandbox, an isolated version of the platform that is unusable except for the user’s own page, and sandboxes that expose only a sliver of an entire platform are not written overnight. However, to delay any violations of the Act, platforms can still attempt to shelter under the protections of Section 230 to limit liability and obligations.
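Conceptually, the sixty-day window boils down to an access rule like the one sketched below. The object and field names are hypothetical, and a real platform would need far more than a single permission check to carve a banned user into a working sandbox.

```python
# Minimal, hypothetical sketch of the access rule the sixty-day window implies.
from dataclasses import dataclass
from datetime import date, timedelta
from typing import Optional

ACCESS_WINDOW = timedelta(days=60)

@dataclass
class User:
    user_id: int
    deplatformed_on: Optional[date] = None  # date the user was notified, if banned

def can_view(viewer: User, content_owner_id: int, today: date) -> bool:
    """Active users pass this check for any content; a deplatformed user may
    only retrieve their own content, and only within the statutory window."""
    if viewer.deplatformed_on is None:
        return True
    within_window = today <= viewer.deplatformed_on + ACCESS_WINDOW
    return within_window and viewer.user_id == content_owner_id

# Example: a user banned on June 1 can still view their own content in July.
banned = User(user_id=42, deplatformed_on=date(2021, 6, 1))
print(can_view(banned, content_owner_id=42, today=date(2021, 7, 1)))  # True
print(can_view(banned, content_owner_id=7, today=date(2021, 7, 1)))   # False
```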

D.   Journalistic Enterprise

Another important part of the Act is that a social media platform is prohibited from censoring, deplatforming, or shadow banning a “journalistic enterprise” based on the content of its publication or broadcast. A journalistic enterprise is defined as an entity doing business in Florida that (1) publishes at least 100,000 words available online with at least 50,000 paid subscribers or 100,000 monthly active users, (2) publishes 100 hours of audio or video available online with at least 100 million viewers annually, (3) operates a cable channel that provides more than 40 hours of content per week to more than 100,000 cable television subscribers, or (4) operates under a broadcast license issued by the Federal Communications Commission. This directly conflicts with the policies of companies such as Google and Facebook, which ban sites that misrepresent, misstate, or conceal information. Whether Facebook and Google are doing a good job of banning fake news sites is a different topic. This protection, however, damages the strides that organizations have made in advocating for better regulation of websites masked as journalism. For example, conspiracy sites that spread misinformation about Covid vaccines or sensitive topics like mass shootings can qualify based on the size of their audience and therefore be protected under the Act.

NetChoice, LLC v. Moody, et al.

Two associations composed of online businesses and other companies in the computer, Internet, information technology, and telecommunications industries filed a complaint against the Attorney General of the State of Florida, the Vice Chair and Commissioners of the Florida Elections Commission, and the Deputy Secretary of Business Operations of the Florida Department of Management Services shortly after Governor DeSantis signed the Act into law. The lawsuit argues that the Act violates online companies’ First Amendment rights and strongly interferes with content moderation of the most harmful, offensive, or unlawful material, such as “pornography, terrorist incitement, false propaganda created and spread by hostile foreign governments, calls for genocide or race-based violence, disinformation regarding Covid-19 vaccines, fraudulent schemes, egregious violations of personal privacy, counterfeit goods and other violations of intellectual property rights, bullying and harassment, conspiracy theories denying the Holocaust or 9/11, and dangerous computer viruses.” Supreme Court precedent has held that online platforms have “editorial discretion over speech and speakers on their property.” The plaintiffs also argue that the Act is preempted by Section 230 and runs afoul of the Constitution’s Commerce Clause, because it engages in protectionist discrimination against online businesses while favoring major Florida-based businesses and Florida candidates. Recently, a federal judge weighed in on the constitutionality of the Act as well.

Privacy First.

This Act came as a shock to many privacy advocates. Prior to its enactment, the Florida House overwhelmingly approved a detailed consumer data privacy bill, HB 969, which would have applied to businesses that meet the definition of a “controller” and/or a “processor.” Under the bill, a “controller” was defined similarly to the California Consumer Privacy Act and the EU’s General Data Protection Regulation (GDPR): a for-profit business that meets at least two of the following thresholds would have been considered a controller and therefore subject to the law: (1) has global annual gross revenue greater than $50 million; (2) annually buys, receives, sells, or shares the personal information of 50,000 or more consumers, households, or devices for targeted advertising; or (3) derives at least half of its global annual revenue from selling or sharing personal information about consumers. The bill would have required controllers, among other things, to maintain and update an online privacy policy annually; inform consumers about the categories of personal information being collected at or before the point of collection and collect only the categories of which the consumer has been given notice; implement and maintain reasonable security procedures and practices to protect personal information from unauthorized or illegal access, destruction, use, modification, or disclosure; and prohibit contracted parties from selling or sharing personal information and from retaining, using, or disclosing personal information other than for the purposes specified in the contract with the business.

The bill provided a private right of action and was seen as a ray of sunshine for Florida consumers looking for privacy protection online. Those hopes were dashed, however, when the Senate passed its own version of the legislation, which removed the private right of action and defined which businesses are covered without using a dollar threshold. The two bills could not be reconciled in time and therefore did not pass before the legislative session closed on April 30.

Despite the growing trend of enacting data protection laws to protect the everyday consumer, Governor DeSantis and the Florida legislature approached tech policy from an unflattering angle with the Stop Social Media Censorship Act. The Act gives companies very little time to comply fully, and even so, I anticipate that they will not go down without a fight. The strongest arguments on their side are the Act’s unconstitutionality (now being challenged in NetChoice, LLC v. Moody) and its preemption by the outdated, yet still binding, Section 230 of the Communications Decency Act.

If you have any questions, such as whether your company must disclose confidential information like trade secrets or proprietary code, and you want to ensure this information stays protected, contact Lalchandani Simon PL at info@lslawpl.com or at 305-999-5291.

Apple’s New Disclosure Requirement Comes into Play Soon

By: Talia Boiangin, J.D., CIPP/US

In June 2020, during its annual Worldwide Developers Conference (WWDC), Apple announced major changes to increase privacy for its users. Mobile app developers, as well as Mac developers, will have to evaluate and disclose their data collection policies to avoid intense scrutiny from Big Brother’s biggest rival. Apple has delayed until “early next year” enforcement of one requirement that allows users to opt out of tracking by apps. Although this change will be huge for users, Apple has recently come under fire from privacy activist Max Schrems, who claims that the company itself is violating EU law by tracking users without their consent. Apple denies the allegations and is proceeding with requiring app privacy details from App Store developers starting December 8, 2020.

The amount of information required could prove difficult and time-consuming for companies without in-house legal counsel, and without a good understanding of what disclosures are required, such companies risk losing revenue. App developers will have to disclose all types of data that they and their third-party partners collect by answering questions on App Store Connect. The goal is to allow users to understand how mobile apps collect personal data. Developers must keep their responses accurate and up to date, which could prove burdensome for those with a full suite of apps or a habit of shifting business models; if a developer does not update its responses, the app won’t be allowed in the iOS App Store or Mac App Store. This sounds more like a Big Brother persona than Big Brother’s rival, but Apple maintains that “our aim is to always protect the privacy of our users.” Hilary Wandall, TrustArc’s SVP, Privacy Intelligence and General Counsel, says, “Apple’s requirements should serve as the tipping point for making privacy nutrition labels mainstream. . . . These new requirements also raise the bar for app developers to know their data, data practices, and data sharing in order to update their apps or launch new ones starting December 8th.”

Apple does list four exceptions to mandatory disclosure of such collection practices (a simple illustrative data inventory follows the list below):

1.     The data is not used for tracking purposes, meaning the data is not linked with Third-Party Data for advertising or advertising measurement purposes, or shared with a data broker. For details, see the Tracking section.

2.     The data is not used for Third-Party Advertising, your Advertising or Marketing purposes, or for Other Purposes, as those terms are defined in the Tracking section.

3.     Collection of the data occurs only in infrequent cases that are not part of your app’s primary functionality, and which are optional for the user.

4.     The data is provided by the user in your app’s interface, it is clear to the user what data is collected, the user’s name or account name is prominently displayed in the submission form alongside the other data elements being submitted, and the user affirmatively chooses to provide the data for collection each time.
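One practical way to prepare for the questionnaire is to keep an internal inventory of what each app collects and why. The sketch below is purely illustrative: the category and purpose strings are examples rather than Apple's official taxonomy, and nothing here is an Apple API, since the actual disclosures are entered through the App Store Connect web form.

```python
# Illustrative internal inventory a developer might keep to prepare answers
# for Apple's App Store Connect questionnaire. Categories, purposes, and field
# names below are assumptions for the sketch, not Apple's official taxonomy.
from dataclasses import dataclass

@dataclass
class DataCollectionRecord:
    data_type: str           # e.g., "email address", "coarse location"
    purposes: list[str]      # e.g., ["app functionality", "analytics"]
    linked_to_user: bool     # is the data tied to the user's identity?
    used_for_tracking: bool  # shared with data brokers / cross-app advertising?

inventory = [
    DataCollectionRecord("email address", ["app functionality"], True, False),
    DataCollectionRecord("coarse location", ["analytics"], False, False),
    DataCollectionRecord("advertising identifier", ["third-party advertising"], True, True),
]

# Anything used for tracking cannot fall under the optional-disclosure
# exceptions quoted above, so flag it for review before the next submission.
for record in inventory:
    if record.used_for_tracking:
        print(f"Review before submission: {record.data_type} is used for tracking")
```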

This new disclosure requirement is a step up from Apple’s existing privacy notice requirement, and it can be useful in more than one way: keeping your app active on the most popular app store in the world, while also staying apprised of your company’s privacy practices and one step ahead of the ever-changing privacy landscape. If you have any questions or want to ensure your app doesn’t lose its place in the App Store, contact Lalchandani Simon PL at info@lslawpl.com or at 305-999-5291.

Changes to Singapore’s Privacy Law Give Businesses Exceptions to Collect, Use, or Disclose Personal Data Without User Consent

By: Talia Boiangin, J.D., CIPP/US

Businesses with an international user base may find themselves revising their privacy policies yet again. This week, Singapore’s Parliament passed a bill that amends the Personal Data Protection Act (PDPA), which came into effect in 2012. Major amendments to the PDPA include an updated consent framework, a substantial increase in the financial penalties cap, mandatory data breach notification to affected individuals and the Personal Data Protection Commission (PDPC), and data portability obligations. During a public meeting in Parliament, Singapore’s Communications and Information Minister S. Iswaran advocated for the amendments as a way to build trust with consumers in a digital economy with a complex data landscape, “and it will ultimately enhance Singapore’s status as an important node in the global network of data flows and digital transactions.”

The new exceptions to obtaining consent for data collection, use, or disclosure are of particular importance. Organizations will be able to use data without consent for business improvement, such as enhancing or conducting research and development (R&D) on products or services, improving operational efficiency and service delivery, or learning more about the organization’s customers. Data may not be used if it is likely to cause an adverse effect on the user. The amendment also allows businesses to forgo consent where the legitimate interests of the organization and the benefit to the public together outweigh any adverse effect on the individual. Legitimate interests include preventing fraud, addressing threats to physical safety and security, and preventing abuse of services. To rely on this exception, Iswaran said that “organizations must conduct an assessment to eliminate or reduce risks associated with the collection, use, or disclosure of personal data, and must be satisfied that the overall benefit of doing so outweighs any residual adverse effect on an individual.” He added that the PDPC will issue detailed guidance on how to identify an “adverse effect, which generally refers to any physical harm, harassment, serious alarm, or distress to an individual.”

A legitimate interest provision for data collection is nothing new. The EU’s General Data Protection Regulation (GDPR) allows organizations to use personal data without consent under Article 6(1)(f), provided they show a legitimate purpose for the use of such data; if the organization cannot show a legitimate interest, it has no lawful basis for processing under that provision. The GDPR also requires a risk-based assessment of potential impacts on data subjects, along with risk mitigation strategies. The UK’s Information Commissioner’s Office (ICO) has published guidance on the process for establishing legitimate interests (a rough sketch of such an assessment appears after the list):

1.     Purpose test – is there a legitimate interest behind the processing?

2.     Necessity test – is the processing necessary for that purpose?

3.     Balancing test – is the legitimate interest overridden by the individual’s interests, rights, or freedoms?
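As a purely illustrative exercise, the ICO's three-part test can be captured as a simple checklist. The structure and field names below are assumptions for the sketch; in practice, a legitimate interests assessment is a documented, qualitative exercise rather than a boolean function.

```python
# Rough sketch of the ICO's three-part legitimate interests assessment as a
# checklist; field names are illustrative, not drawn from the ICO guidance.
from dataclasses import dataclass

@dataclass
class LegitimateInterestsAssessment:
    purpose: str                      # what legitimate interest is pursued?
    purpose_is_legitimate: bool       # 1. purpose test
    processing_is_necessary: bool     # 2. necessity test (no less intrusive way)
    individual_rights_override: bool  # 3. balancing test outcome

    def may_rely_on_legitimate_interests(self) -> bool:
        return (
            self.purpose_is_legitimate
            and self.processing_is_necessary
            and not self.individual_rights_override
        )

# Example: fraud prevention is a commonly cited legitimate interest.
lia = LegitimateInterestsAssessment(
    purpose="fraud prevention on customer accounts",
    purpose_is_legitimate=True,
    processing_is_necessary=True,
    individual_rights_override=False,
)
print(lia.may_rely_on_legitimate_interests())  # True
```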

Singapore’s legitimate interest exception appears to differ from the GDPR’s, which is no surprise given the complex web of worldwide data privacy regulations. Increasingly, countries are creating or amending data protection laws in an attempt to accommodate future innovation and emerging technologies while building the necessary trust from consumers.

If you would like guidance to help ensure compliance with the changing privacy landscape and avoid large fines, contact Lalchandani Simon PL at info@lslawpl.com or at 305-999-5291.