With Google and Facebook Under Fire, Section 230 is at a Tipping Point as More Push for Changes

Former FCC Chairman Reed Hundt speaks at The Capitol Forum on June 7, 2019 (Drew Clark / Breakfast Media)

WASHINGTON, June 12, 2019 – New critics of Section 230 of the Communications Decency Act seem to emerge every day on both the political right and the left.

The law states that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

These 26 words are widely credited with creating the free-wheeling internet of today. They did so by shielding internet social media “publishers” – otherwise known as internet “platforms” – from liability for the content created by their users.

On Tuesday, conservative firebrand Rep. Matt Gaetz, R-Fla., became the latest on the right to fire at Section 230. At a hearing on Google and Facebook’s impact on journalism, Gaetz floated the possibility of removing or altering the Section 230 protections upon which the technology industry has come to rely.

He also imputed a kind of “fairness” or neutrality standard to which internet platforms must purportedly subscribe if they wish to retain the protection from liability provided by Section 230.

From the left, Reed Hundt criticizes Section 230 protections as ‘naïve’

But it isn’t just conservatives gunning for Section 230: So are progressives, including Reed Hundt, the author of a recent book critical of what he calls Barack Obama’s “neoliberal” handling of the Great Recession.

Speaking about his book “A Crisis Wasted: Barack Obama’s Defining Decisions” at a Friday forum hosted by The Capitol Forum, Hundt concurred with some – on the left and on the right – who want to break up Facebook.

Hundt, the first chairman of the Federal Communications Commission under President Bill Clinton, said that he was “wrong” not to oppose Section 230 when it was introduced as part of the Communications Decency Act that passed in 1996.

Hundt contrasted Section 230’s approach with the libel standard that governs traditional publishers like The New York Times. The landmark 1964 Supreme Court decision New York Times Co. v. Sullivan held that a newspaper must have made a false statement knowingly or with reckless disregard for the truth to be liable for libeling a public official.

It is that standard to which The New York Times is held when it decides to publish “user-generated content” like a letter to the editor. By contrast, Facebook takes much less care in its treatment of content posted by users on its site.

As a result, Facebook and other social media networks permit far more violence and hatred on their web sites than a traditional publisher like The Times would ever countenance on its web site.

Hundt said that the laissez-faire approach of the 1990s, including Section 230, was built around the presumption that “people are good.” Of the time, he now says, “we were very naïve.”

Section 230 took an alternative approach to incentivize online decency

Section 230 was drafted as part of the Communications Decency Act included in the Telecommunications Act of 1996. The CDA barred “indecent” material online; those indecency provisions were struck down as unconstitutional in 1997 by the Supreme Court in Reno v. ACLU.

But Section 230 has remained the law of the land. And courts have read its provisions broadly in exempting technology companies from liability.

The provisions of Section 230 had originally been proposed as an alternative remedy to an outright ban of indecent content. Indeed, it gave an “interactive computer service” protection for “any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.”

Today, with greater scrutiny of social media companies’ market power and of the toxic speech emanating from their platforms, Section 230 is under much greater fire from political and social leaders. And yet courts keep making it difficult to limit its scope and reach.

Courts keep reading Section 230 broadly, so prosecutors want legislative changes

On Friday, for example, the U.S. Court of Appeals for the D.C. Circuit cited Section 230 in holding that Google, Microsoft and Yahoo aren’t liable for hosting content posted by known scammers. A group of locksmiths had sued the platforms, claiming that the platforms were effectively engaging in a racket to incentivize legitimate locksmiths to buy ads in order to push scammers lower in search results.

And that’s probably why state attorneys general are also not letting up in their criticism of Section 230. Last month, 47 of 50 attorneys general joined a letter of the National Association of Attorneys General supporting legislative changes to the law. They say (PDF) that Section 230 precludes state and local authorities from enforcing laws against “sex trafficking and crimes against children.”

The attorneys general continue:

  • “We sadly note that the abuse on these platforms does not stop at sex trafficking. Stories of online black market opioid sales, ID theft, deep fakes, election meddling, and foreign intrusion are now ubiquitous, and these growing phenomena will undoubtedly serve as the subjects of hearings throughout the 116th Congress. Current precedent interpreting the CDA, however, continues to preclude states and territories from enforcing their criminal laws against companies that, while not actually performing these unlawful activities, provide platforms that make these activities possible. Worse, the extensive safe harbor conferred to these platforms by courts promotes an online environment where these pursuits remain attractive and profitable to all involved, including the platforms that facilitate them.”

The attorneys general had also urged Congress to amend Section 230 in 2013 and 2017. Congress took them up on their request, for the first time, when it made a change in 2018 with the passage of the “Stop Enabling Sex Traffickers Act” and “Allow States and Victims to Fight Online Sex Trafficking Act” (known as FOSTA-SESTA). Passed last year, the measure provides that Section 230 immunity does not apply against enforcement of federal or state sex trafficking laws.

Will changes to Section 230 help big social media companies at the expense of competition?

Some populist Republicans are treating Section 230 as if it were an all-purpose punching bag to go after technology companies and internet platforms.

At a May policy forum flaying Facebook, Sen. Josh Hawley, R-Missouri, said that Section 230 is “predicated on [platforms] providing open, fair and free platforms. If they are not going to do that, but insert their own political biases, then they start to look a lot more like a newspaper, or TV station, but don’t qualify for Section 230.”

At the same time, Hawley took a nuanced view about the possible effects that changes to Section 230 might have on startup companies attempting to compete against giants like Facebook and Google. “We need to make sure that [changes to Section 230 are] not a benefit to incumbency.”

Dan Huff, counsel to former House Judiciary Committee Chairman Bob Goodlatte, R-Va., said at the event that Congress should be bolder in exercising its power. The House, he argued, should use the threat of revising Section 230 as a weapon to force Google and Facebook to let the public know how their algorithms highlight particular search results or promote certain items within a user’s social news feed.

Congress should say to these companies: “Unless you make public the grounds on which you keep content off your platform,” we are going to eliminate or drastically scale back Section 230 protections, said Huff.

White House to Host Social Media Officials on Friday to Discuss Violent Extremism Online

WASHINGTON, August 7, 2019 — The White House on Friday will host a meeting to bring together administration officials and technology executives to discuss ways to combat violent extremism on the internet, a senior administration official told Breakfast Media.

“We have invited internet and technology companies for a discussion of violent extremism online,” the official said.

The official stressed that the meeting would be led at the staff level, with select senior White House officials in attendance “along with representatives from a range of companies.”

The Trump administration’s newfound interest in combatting online extremism comes in the wake of last weekend’s mass shooting at an El Paso, Texas Wal-Mart, which claimed the lives of 22 people.

The alleged perpetrator, a 21-year-old white nationalist, posted an online manifesto rife with anti-Hispanic and anti-immigrant rhetoric which closely tracked Trump’s own repeated words about an “invasion” of Mexican and other Latin Americans at the United States border.

The manifesto, entitled “The Inconvenient Truth,” was posted to the online platform 8chan. In it, the alleged perpetrator claimed that the shooting was in response to the “Hispanic invasion” of Texas.

Last weekend’s shooting came less than six months after another alleged mass shooter posted a similarly racist manifesto to 8chan before shooting and killing 51 people at two mosques in Christchurch, New Zealand.

In prepared remarks delivered on Monday, Trump did not address his own rhetoric’s role in inspiring the El Paso shooter, but he did attempt to place some measure of blame for the shooting on the internet, which he said “has provided a dangerous avenue to radicalize disturbed minds and perform demented acts.”

“We must shine light on the dark recesses of the Internet, and stop mass murders before they start,” he said.

“The perils of the Internet and social media cannot be ignored, and they will not be ignored.”

Seeking to Quell ‘Evil Contagion’ of ‘White Supremacy,’ President Trump May Ignite New Battle Over Online Hate Speech

Photo of Vice President Pence beside Trump speaking on August 5, 2019, from the White House

WASHINGTON, August 5, 2019 — President Donald Trump on Monday morning attempted to strike a tone of unity by denouncing the white, anti-Hispanic man who “shot and murdered 20 people, and injured 26 others, including precious little children.”

In speaking about the two significant mass shootings over the weekend in Texas and Ohio, Trump delivered prepared remarks in which he specifically denounced “racism, bigotry, and white supremacy,” linking that hatred to the “warp[ed] mind” of the racially motivated El Paso killer.

That shooter – now in custody – posted a manifesto online before the shooting in which he said he was responding to the “Hispanic invasion of Texas.” The shooter cited the March 15 massacre at two mosques in Christchurch, New Zealand, as an inspiration for his action.

In White House remarks with Vice President Mike Pence standing at his side, Trump proposed solutions to “stop this evil contagion.” Trump denounced “hate” or “racist hate” four times.

Trump’s first proposed solution: “I am directing the Department of Justice to work in partnership with local, state, and federal agencies, as well as social media companies, to develop tools that can detect mass shooters before they strike.”

That proposal appeared to be an initiative that was either targeted at – or potentially an opportunity for collaboration with – social media giants like Twitter, Facebook and Google.

Indeed, Trump and others on the political right have repeatedly criticized these social media giants for bias against Trump and Republicans.

Sometimes, this right-wing criticism of Twitter emerges after a user is banned for violating the social media company’s terms of service against “hate speech.”

In Trump’s remarks, he also warned that “we must shine light on the dark recesses of the internet.” Indeed, Trump said that “the perils of the internet and social media cannot be ignored, and they will not be ignored.”

But it must be equally clear to the White House that the El Paso killer – in his online manifesto – used anti-Hispanic and anti-immigrant rhetoric very similar to Trump’s own repeated words about an “invasion” of Mexican and other Latin Americans at the United States border.

Hence this mass murder contains elements of political peril both for Donald Trump and for his frequent rivals at social media companies like Twitter, Facebook and Google.

8chan gets taken down by its network provider

Minutes before the El Paso attack at a Wal-Mart, a manifesto titled “The Inconvenient Truth” was posted to the online platform 8chan, claiming that the shooting was in response to the “Hispanic invasion.” The killer specifically cited the Christchurch shooter’s white supremacist manifesto as an inspiration.

Social media platforms, previously exploited by Islamic terrorists, are increasingly being used by white supremacist terrorists. In addition to posting his manifesto online, the Christchurch shooter livestreamed his attack on Facebook.

In April, a man posted an anti-Semitic and white nationalist letter to the same online forum, 8chan, before opening fire at a synagogue near San Diego, California.

And on July 28, the gunman who killed three people at a garlic festival in Gilroy, California, allegedly promoted a misogynist white supremacist book on Instagram just prior to his attack.

But Saturday’s El Paso shooting motivated some companies to act. Cloudflare, 8chan’s network provider, pulled its support for 8chan early on Monday morning, calling the platform a “cesspool of hate.”

“While removing 8chan from our network takes heat off of us, it does nothing to address why hateful sites fester online,” wrote Cloudflare CEO Matthew Prince.

“It does nothing to address why mass shootings occur,” said Prince. “It does nothing to address why portions of the population feel so disenchanted they turn to hate. In taking this action we’ve solved our own problem, but we haven’t solved the internet’s.”

Prince went on to voice his discomfort with the company taking on the role of content arbiter, and pointed to Europe’s attempts at greater government involvement.

The Christchurch massacre opened a dialogue between big tech and European critics of ‘hate speech’

Following the Christchurch attack, 18 governments in May signed the Christchurch Call pledge (PDF) seeking to stop the internet from being used as a tool by violent extremists. The U.S. did not sign on, and the White House voiced concerns that the document would violate the First Amendment.

Dubbed “The Christchurch Call to Action to Eliminate Terrorist and Violent Extremist Content Online,” the May document included commitments both by online service providers and by governments.

Among other measures, the online providers were to “[t]ake transparent, specific measures seeking to prevent the upload of terrorist and violent extremist content and to prevent its dissemination on social media.”

Governments were to “[e]nsure effective enforcement of applicable laws that prohibit the production or dissemination of terrorist and violent extremist content.”

Although Silicon Valley has had a reputation for supporting a libertarian view of free speech, the increasingly unruly world of social media over the past decade has put that First Amendment absolutism to the test.

Indeed, five big tech giants – Google, Amazon, Facebook, Twitter and Microsoft – voiced their support for the Christchurch Call on the day of its release.

In particular, they accepted the restrictions on speech that the Christchurch Call would entail, saying that the massacre was “a horrifying tragedy” that made it “right that we come together, resolute in our commitment to ensure we are doing all we can to fight the hatred and extremism that lead to terrorist violence.”

They also noted that the Christchurch Call expands on the Global Internet Forum to Counter Terrorism set up by Facebook, Google’s YouTube, Microsoft and Twitter in the summer of 2017.

That organization is focused on disrupting terrorists’ ability to promote terrorism, disseminate violent propaganda, and exploit or glorify real-world acts of violence.

The tech giants said (PDF) that they were sharing more information about how they could “detect and remove this content from our services, updates to our individual terms of use, and more transparency for content policies and removals.”

Will Trump politicize the concept of ‘hate speech’ that tech companies are uniting with Europe to take down?

In his Monday statement commenting on an ostensible partnership between the Justice Department and the social media companies, Trump referred to the need to “detect mass shooters before they strike.”

And he had this specific example: “As an example, the monster in the Parkland high school in Florida had many red flags against him, and yet nobody took decisive action. Nobody did anything. Why not?”

Part of the challenge now faced by social media companies is frankly political. Although Twitter has taken aggressive steps to eradicate ISIS content from its platform, it has not applied the same tools and algorithms to take down white supremacist content.

Society accepts the risk of inconveniencing potentially related accounts, such as those of Arabic-language broadcasters, for the benefit of banning ISIS content, Motherboard summarized earlier this year based on its interviews with Twitter employees.

But if these same aggressive tactics were deployed against white nationalist terrorism, the algorithms would likely flag content from prominent Republican politicians, far-right commentators – and Donald Trump himself, these employees said.

Indeed, right after declining to sign the Christchurch Call, the White House escalated its war against American social media by announcing a campaign asking internet users to share stories of when they felt censored by Facebook, Twitter and Google’s YouTube.

And in June, Twitter made it clear that it was speaking directly about Tweets by prominent public officials, including the president, that violated its terms of service.

“In the past, we’ve allowed certain Tweets that violated our rules to remain on Twitter because they were in the public’s interest, but it wasn’t clear when and how we made those determinations,” a Twitter official said. “To fix that, we’re introducing a new notice that will provide additional clarity in these situations, and sharing more on when and why we’ll use it.”

White House officials did not immediately respond to whether the Trump administration was reconsidering its opposition to the Christchurch Call.

Will Trump’s speech put others in the spotlight, or keep it on him and his rhetoric?

In addition to highlighting the anticipated effort with social media, Trump offered four additional suggested “bipartisan solutions” to the “evil contagion” behind the Texas and Ohio mass shootings.

They included “stopp[ing] the glorification of violence in our society” in video games, addressing mental health laws “to better identify mentally disturbed individuals,” keeping firearms from those “judged to pose a grave risk to public safety,” and seeking the death penalty against those who commit hate crimes and mass murders.

Trump’s advisers said that they hoped the speech would stem the tide of media attention being given to the links between his frequent use of dehumanizing language to describe Latin American immigrants and the El Paso shooter’s stated motives.

As he delivered his prepared remarks from a TelePrompTer in a halting cadence, Trump appeared to be reading the speech for the first time. This led to an awkward moment when he suggested that the second shooting of the weekend – which had taken place outside a Dayton, Ohio bar – had been in Toledo, Ohio.

But despite the discomfiture that is evident whenever he reads prepared remarks to the White House press pool cameras, Trump made an attempt to silence critics like former El Paso Congressman Beto O’Rourke – who just hours before had explicitly called the President a white nationalist – by calling for the defeat of the “sinister ideologies” of hate.

“In one voice, our nation must condemn racism, bigotry, and white supremacy,” Trump said. “Hate has no place in America. Hatred warps the mind, ravages the heart, and devours the soul.”

Trump did not elaborate on the hate-based motivations of the El Paso shooter. Rather than reflect on where the El Paso shooter may have gotten the idea that Hispanics were “invading” the United States, Trump cast blame on a target often invoked by conservatives after such mass shootings: video games.

Although Trump has previously delivered remarks in the aftermath of violent acts committed by white supremacists and white nationalists during his presidency, Monday’s speech marked the first time that the President had chosen to specifically condemn “white supremacy,” rather than deliver a more general condemnation of “hate.”

In his rhetoric, both on his Twitter account and on the campaign trail, Trump has used non-whites as a foil since his 2015 campaign announcement speech, in which he described Mexican immigrants as “rapists” who bring crime and drugs to America.

That rhetoric reappeared in the 2018 Congressional elections, as Trump’s talk of an “invasion” from South and Central America took up a significant portion of his rally stump speech.

As the 2020 election draws nearer, Trump’s campaign strategy seems similarly to rest on demonizing racial minorities and prominent Democrats of color, most recently Rep. Elijah Cummings, D-Md., the chairman of the House Oversight Committee.

Trump critics not appeased by his Monday speech

Against that backdrop, commentators said Monday’s condemnation of white supremacy marked a 180-degree turn for the President. But his performance left few observers convinced of his sincerity.

House Homeland Security Committee Chairman Bennie Thompson, D-Miss., called the President’s speech “meaningless.”

“We know tragedy after tragedy his words have not led to solid action or any change in rhetoric. We know his vile and racist words have incited violence and attacks on Americans,” he said in a statement. “Now dozens are dead and white supremacist terrorism is on the rise and is now our top domestic terrorism threat.”

Sen. Ron Wyden, D-Ore., tweeted that Trump had “addressed the blaze today with the equivalent of a water balloon” after “fanning the flames of white supremacy for two-and-a-half years in the White House.”

Ohio Democratic Party Chairman David Pepper said Trump’s condemnation of white supremacy in Monday’s remarks could not make up for his years of racist campaign rhetoric.

“Through years of campaigning and hate rallies, to now say ‘I’m against hateful people and racism,’ is just hard to listen to,” Pepper said during a phone interview.

“Unless he’s willing to say ‘I know I’ve been a part of it’ with a full apology and some self-recognition, it felt like he was just checking the boxes.”

Pepper suggested that Trump “was saying what someone told him to say,” and predicted that Trump would soon walk back his remarks, much as he did after the 2017 “Unite the Right” white supremacist rally in Virginia.

Charlie Sykes, a former conservative talk radio host and editor of “The Bulwark,” echoed Pepper’s sentiments in a separate phone interview, but also called out Trump for failing to speak of the El Paso shooter’s motivations.

“It was so perfunctory and inadequate because he condemned the words ‘bigotry and racism,’ but he didn’t describe what he was talking about,” Sykes said.

Sykes criticized Trump for failing to take responsibility for his routine use of racist rhetoric, including descriptions of immigrants as “invaders” who “infest” the United States.

“Unless you’re willing to discuss the dehumanization behind the crimes, the invocation of certain words doesn’t change anything.”

Another longtime GOP figure whom Trump failed to impress was veteran strategist Rick Wilson, who cited the speech as the latest example of “the delta between Trump on the TelePrompTer and Trump at a rally,” a difference he described as “enormous.”

“Nothing about that speech had a ring of authenticity to it,” said Wilson, a GOP ad maker and the author of “Everything Trump Touches Dies.”

“The contrast between the speechwriter’s handiwork and the real Donald Trump…is rather marked,” he said.

Where does online free speech – and allegations of ‘hate crimes’ – go from here?

Although the social media companies are making more efforts to identify and expunge online hate, they are unlikely to be able to get very far without someone – perhaps even President Trump – crying foul.

Putting the politics of online hate speech aside, the U.S. does take a fundamentally different approach to freedom of expression than does Europe.

According to Human Rights Watch, hundreds of French citizens are convicted each year of “apology for terrorism,” which covers any positive comment about a terrorist or terrorist organization. Online offenses are treated especially harshly.

By contrast, the U.S. has a fundamental commitment to the freedom of speech—including speech that is indecent, offensive, and hateful.

The Supreme Court has ruled that speech is unprotected when it is “directed to inciting or producing imminent lawless action” and is “likely to incite or produce such action.”

But this exception is extremely narrow—in Brandenburg v. Ohio, the Court reversed the conviction of a Ku Klux Klan leader who had advocated violence as a means of political reform, reasoning that his statements were not directed to producing imminent lawless action.

The limitations on government leave the responsibility of combating online extremism to the digital platforms themselves, said Open Technology Institute Director Sarah Morris at a panel last month.

“In general, private companies have a lot more flexibility in how they respond to terrorist propaganda than Congress does,” said Emma Llansó, Director of the Free Expression Project at the Center for Democracy & Technology. “They need to be clear about what their policies are and enforce them transparently.”

But companies also need to carefully consider how they will respond to pressure from governments and individuals around the world, said Llansó, adding that “no content policy or community guideline is ever applied just in the circumstances it was designed for.”

“As the experience of social media companies has shown us, content moderation is extremely difficult to do well,” Llansó concluded. “It requires an understanding of the context that the speaker and the audience are operating in, which a technical infrastructure provider is not likely to have.”

(Managing Editor Andrew Feinberg and Reporter Emily McPhie contributed reporting to this article.)

Sen. Josh Hawley Speaks to the Snapchat Generation, Babysitting Them With 30 Minutes a Day on Social Media

Emily McPhie

WASHINGTON, August 1, 2019 — Sen. Josh Hawley, R-Mo., took the stage at the National Conservative Student Conference on Wednesday to thunderous applause and dozens of phones held in the air to capture his entry on Snapchat and Instagram.

But Hawley, the youngest member of the Senate, is no friend to social media. On Tuesday, he introduced a bill targeting “addictive” engagement practices by social media platforms including YouTube’s autoplay, Snapchat’s streaks, and Twitter’s infinite scroll. The legislation would, by default, limit a user’s daily time on a platform to 30 minutes.

When asked by a student at Wednesday’s conference how preventing tech companies from innovating would solve any of the important issues around big tech, Hawley pounced on the word “innovation.”

“My biggest critique of big tech—besides the fact that they want to shut down our speech, and besides the fact that they’ve gotten rich on taxpayer dollars, and besides the fact that they’ve got great inside deals from government, … my biggest critique of big tech is, what innovation have they really given us?” Hawley asked.

“We just celebrated the 50th anniversary of the moon landing. Think about what the tech sector gave to America in the decade of the 1960s.”

“And what is it now that in the last 15 or 20 years the people who say they’re the brightest minds in the country have given this country? What are their great innovations? Autoplay? Snap streak? Ever additional refinements of a behavioral ad-based platform?”

Ironically, earlier in his speech, Hawley declared that “it’s time that we stood up to big government, to the people in government who think they know better.”

This week Hawley introduced his Social Media Addiction Reduction Technology Act (PDF). It includes language that “automatically limits the amount of time that a user may spend on those platforms across all devices to 30 minutes a day unless the user elects to adjust or remove the time limit and, if the user elects to increase or remove the time limit, resets the time limit to 30 minutes a day on the first day of every month.”

Issues with Section 230’s limitations on liability

Hawley also repeated the claims made in his proposed legislation to remove Section 230 protections from big tech platforms.

“It is time that these big companies who have gotten insider deals with the government are actually held accountable,” he said.

“No one has gotten more of a sweetheart deal from the federal government than the big technology companies—Facebook, Google, Twitter. They’re treated differently than any other platforms or publishers in America, …and they’ve gotten rich and powerful and profitable because of it.”

“Now they’re saying that they should be able to decide whether or not conservatives get to speak on their platforms—what conservative speech is acceptable and what isn’t,” Hawley continued.

“They now want to censor and sit in judgement on our political speech…if they’re going to get these special deals from government, that shouldn’t happen.”

Hawley acknowledged that Facebook, Google, and Twitter have all appeared in front of Congress and testified under oath that they never censor content based on political viewpoint. But if this is really true, he said, the companies should open their books to an audit to prove it.

At those hearings, others introduced testimony, including a survey conducted by The Economist that found no evidence that Google biases results against conservatives, and a data analysis released by Twitter that found no statistically significant difference in how often Tweets sent by Democratic and Republican members of Congress were viewed.

Sen. Marsha Blackburn, R-Tenn., chair of the Senate Judiciary Committee’s Tech Task Force, also spoke about alleged social media bias. She proposed a more cautious solution of “a framework to guide these platforms without dictating every single business decision.”

But her overall message was still that “it is time for big tech to become accountable for what they have done to conservative voices on the internet and on these social media platforms.”
