A brief history and overview of policies involving “public interest” requirements for commercial media and telecommunications companies;
The state of local commercial broadcast TV and radio news and information; and
The impact of media convergence and the emergence of the Internet, mobile technologies, and digital media on FCC media policy.
In my remarks, I focused on “Why Expansion of the FCC’s Public Interest Regulatory Regime is Unwise, Unneeded, Unconstitutional, and Unenforceable.” My written remarks are attached below. Continue reading →
Mobile broadband speeds (at the “core” of wireless networks) are about to skyrocket—and revolutionize what we can do on-the-go online (at the “edge”). Consider four recent stories:
Networks: MobileCrunch notes that Verizon will begin offering 4G mobile broadband service (using Long Term Evolution, or LTE) “in up to 60 markets by mid-2012”; at an estimated 5-12 Mbps down and 2-5 Mbps up, LTE would be faster than most wired broadband service.
Devices: Sprint plans to launch its first 4G phone (using WiMax, a competing standard to LTE) this summer.
Applications: Google has finally released Google Earth for the Nexus One smartphone on T-Mobile, the first to run Google’s Android 2.1 operating system.
Content: In November, Google announced that YouTube would begin offering high-definition 1080p video, including on mobile devices.
While the Nexus One may be the first Android phone with a processor powerful enough to crunch the visual awesomeness that is Google Earth, such applications will still chug along on even the best of today’s 3G wireless networks. But combine the ongoing increases in mobile device processing power made possible by Moore’s Law with similar innovation in broadband infrastructure, and everything changes: You can run hugely data-intensive apps that require real-time streaming, from driving directions with all the rich imagery of Google Earth to mobile videoconferencing to virtual world experiences that rival today’s desktop versions to streaming 1080p high-definition video (3.7+ Mbps) to… well, if I knew, I’d be in Silicon Valley launching a next-gen mobile start-up!
This interconnection of infrastructure, devices and applications should remind us that broadband isn’t just about “big dumb pipes”—especially in the mobile environment, where bandwidth is far more scarce (even in 4G) due to spectrum constraints. Network congestion can spoil even the best devices on the best networks. Just ask users in New York City, where AT&T has apparently stopped selling the iPhone online in an effort to relieve its over-taxed network from the staggering bandwidth demands of Williamsburg hipsters, Latter-Day Beatniks from the Village, Chelsea boys, and Upper West Side Charlotte Yorks, all streaming an infinite plethora of YouTube videos and so on. Continue reading →
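As a rough illustration of why congestion bites so hard, here is a back-of-envelope sketch in Python. The 3.7 Mbps figure for 1080p streaming and the 5-12 Mbps LTE estimate come from the post above; the per-sector capacity is an assumed, illustrative number, not a measurement.

```python
# Back-of-envelope: how many simultaneous users can a shared cell sector feed
# before per-user throughput drops below what 1080p streaming needs?
# The sector capacity is a hypothetical, illustrative figure.

HD_1080P_MBPS = 3.7          # approximate 1080p streaming bitrate (cited in the post)
SECTOR_CAPACITY_MBPS = 40.0  # assumed aggregate sector throughput (illustrative)

def per_user_mbps(active_users: int, capacity_mbps: float = SECTOR_CAPACITY_MBPS) -> float:
    """Naively split a sector's capacity evenly among active users."""
    return capacity_mbps / active_users

for users in (1, 5, 10, 25, 50):
    share = per_user_mbps(users)
    verdict = "enough for 1080p" if share >= HD_1080P_MBPS else "too slow for 1080p"
    print(f"{users:3d} active users -> {share:5.1f} Mbps each ({verdict})")
```

Even under this generous, evenly shared model, a single sector supports only about ten simultaneous 1080p streams; real-world scheduling, signal quality, and bursty demand make the picture worse, which is the point about spectrum scarcity above.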
“With a few notable exceptions, the tech industry seems unwilling to regulate itself. I will introduce legislation that will require Internet companies to take reasonable steps to protect human rights, or face civil and criminal liability.” – Senator Dick Durbin, as reported by the Washington Post.
We hear you, Sen. Durbin. The practices of many nations toward free speech and political dissidents are terribly wrong. But we respectfully and strongly disagree with your statements at yesterday’s Senate Judiciary hearing on global Internet freedom and the rule of law.
The growth of IT companies throughout the world has been an enormous boon to free speech and human rights. Although these technologies present new challenges, particularly when taken together with widely varying laws, they are doing far more good than harm, everywhere that they are deployed.
But if you attended the hearing and knew nothing about the Internet, you’d think that American online companies doing business in China and elsewhere were pure evil, as if they were the ones with the power to refuse to comply with, or to change, the criminal laws of other nations.
In particular, Facebook and Twitter were called out for not joining the Global Network Initiative (GNI). The product of more than two years of study and development by companies and public interest groups, the Initiative offers a set of guiding principles for global IT companies doing business in an increasingly global environment.
But while the GNI exposes online companies to new scrutiny, it doesn’t provide any protection from aggressive governments. And at a price tag of $200,000, the GNI isn’t cheap. How effective will it be, really, at changing the practices of totalitarian nations? Continue reading →
Very cool little video here by Jess3 documenting Internet growth and activity. Ironically, Berin sent it to me just as Adam Marcus and I were updating the lengthy list of Net and online media stats you’ll find below. Many of the stats we were compiling are shown in the video. Enjoy!
1.73 billion Internet users worldwide as of Sept 2009; an 18% increase from the previous year.[1]
81.8 million .COM domain names at the end of 2009; 12.3 million .NET names & 7.8 million .ORG names.[2]
234 million websites as of Dec 2009; 47 million were added in 2009.[3] In 2006, Internet users in the United States viewed an average of 120.5 Web pages each day.[4]
There are roughly 26 million blogs on the Internet[5] and even back in 2007, there were over 1.5 million new blog posts every day (17 posts per second).[6] Continue reading →
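As a quick sanity check, a couple of the derived figures in that list follow from simple arithmetic on the numbers already cited; the short sketch below just re-derives them.

```python
# Quick arithmetic check of a few derived figures in the stats list above.

SECONDS_PER_DAY = 24 * 60 * 60  # 86,400

# 1.5 million blog posts per day -> posts per second
posts_per_day = 1_500_000
print(f"Posts per second: {posts_per_day / SECONDS_PER_DAY:.1f}")  # ~17.4

# 1.73 billion Internet users after an 18% annual increase -> implied prior-year base
users_2009 = 1.73e9
print(f"Implied users a year earlier: {users_2009 / 1.18 / 1e9:.2f} billion")  # ~1.47 billion

# 234 million websites at the end of 2009, 47 million added that year -> implied growth rate
sites_end_2009 = 234e6
sites_added_2009 = 47e6
print(f"Implied website growth in 2009: {sites_added_2009 / (sites_end_2009 - sites_added_2009):.0%}")  # ~25%
```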
C-SPAN is really quite incredible when you think about it. When I was growing up in the 70s, there was nothing like it. Like most other Americans, I got my national news and politics from what a couple of old white dudes in bad suits delivered each night around 6:30 on the three VHF channels I had access to. And since no national newspapers were delivered to my small town in rural Illinois, I had to rely on crummy local papers to fill the void with whatever national reporting they offered, which wasn’t much.
And then came C-SPAN. C-SPAN alone covers more political and civic-minded activity in the course of a week than most of us probably came into contact with in our entire lives just 30 years ago. Consider these data points, which Peter Kiley, Vice President of C-SPAN Networks, was kind enough to help me aggregate. In the 2009 calendar year, C-SPAN provided the following amount of first-run programming across its three channels:
8,438 overall hours of programming;
2,709 hours of House & Senate floor activity; and,
1,222 hours of House & Senate committee hearings.
Moreover, C-SPAN recently created the C-SPAN Video Library, which archives 23 years’ worth (1987 on) of fully searchable (and free) video content, including: Continue reading →
I recently wrote an op-ed for the American Legislative Exchange Council’s Inside ALEC publication. It’s decidedly non-technical, as most correspondence with the majority of those in the legislative branch must be. In my dealings with those in state government positions, it seems that only in the last few months have many of them become aware of the FCC’s Net Neutrality proposals — or even the issue itself. I don’t blame them. State legislators are often more concerned with local issues such as solving their budget deficits or finding funding for critical government operations.
But it’s important that they also keep an eye on what’s happening in “the other Washington” (as we Washington staters like to call it), as the policies from Congress, the Administration and federal agencies trickle down to affect each and every one of us.
Former Director of National Intelligence Mike McConnell had a 1,400-word piece in the Washington Post on Sunday in which he stressed a public-private partnership as the key to a robust cyber-defense. One paragraph caught my attention, though:
We need to develop an early-warning system to monitor cyberspace, identify intrusions and locate the source of attacks with a trail of evidence that can support diplomatic, military and legal options — and we must be able to do this in milliseconds. More specifically, we need to reengineer the Internet to make attribution, geolocation, intelligence analysis and impact assessment — who did it, from where, why and what was the result — more manageable. The technologies are already available from public and private sources and can be further developed if we have the will to build them into our systems and to work with our allies and trading partners so they will do the same.
I’m not sure what he’s talking about, and I’d love it if a knowledgeable reader would chime in. I don’t see how such a spoof-proof attribution and geolocation system could work without a complete overhaul of how the Internet works.
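To make that skepticism concrete, here is a minimal sketch, using the third-party scapy library and addresses from the reserved documentation ranges, of why the source address in an ordinary IP packet proves so little: it is just a header field the sender fills in. This illustrates the general attribution problem, not anything McConnell specifically proposed.

```python
# Minimal sketch of IP source-address spoofing with the third-party `scapy` library.
# The addresses are from the RFC 5737 documentation ranges; sending raw packets
# typically requires root privileges, and this is for illustration only.
from scapy.all import IP, UDP, send

forged = (
    IP(src="203.0.113.99",   # claimed source: whatever the sender chooses to write
       dst="198.51.100.7")   # actual destination
    / UDP(sport=5353, dport=53)
)

# Nothing in the basic IP header authenticates `src`; the receiver (and any GeoIP
# lookup keyed on it) sees only the address the sender wrote.
send(forged, verbose=False)
```

Many networks do filter obviously spoofed traffic (BCP 38), and higher-layer protocols add their own checks, but nothing in the IP header itself proves where a packet originated, which is why the millisecond-scale, evidence-grade attribution McConnell describes would seem to require far deeper changes than bolting a lookup onto today's protocols.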
The Wall Street Journal Europe today published an op-ed that Alberto Mingardi of Istituto Bruno Leoni and I penned about the competition complaints brought against Google in Europe.
If policy makers set the terms in a primitive year like 2010, nobody will have to respond to Google.
By WAYNE CREWS AND ALBERTO MINGARDI
Google isn’t a monopoly now, but the more it tries to become one, the better it will be for us all. Competition works in this way: Capitalist enterprises strive to gain in profits and market share. In turn, competitors are forced to respond by trying to improve their offerings. Innovation is the healthy output of this competitive game. The European Commission, while pondering complaints against the Internet search giant, might consider this point.
Google has been challenged by a German, a British, and a French Web site for its dominant position in the market for Web search and online advertising. The U.S. search engine is said to be imposing difficult terms and conditions on competitors and partners, who are now calling regulators into action. Google’s search algorithm is accused of being “biased” by business partners and competing publishers alike.
Before resorting to the old commandments of antitrust, we should consider that the Internet world is still largely opaque and unknown to anybody—including regulators. We are in terra incognita, and nobody knows the likely evolution of the market. But one thing is for sure: Online search can’t evolve properly if it’s improperly regulated—no matter the stage of evolution.
Because of some recent skepticism about the economic viability of open-source software (and because of an upcoming presentation I’m giving on the topic), I’m calling on the TLF readership to give me some examples of companies—from big-name brands to small design shops—that are making money through creating or contributing to open-source software projects.
I’m not just looking for millionaires like Matt Mullenweg of WordPress; I’m also looking for examples like design shops contributing to the development of projects like Drupal, independent developers promoting themselves through successful open-source products, or small-scale software support companies that also give back to the code base.
Please leave a comment with as many examples as you like.