Interconnection Policy

Eli M. Noam*

 

Professor of Finance and Economics

Columbia University Graduate School of Business

Director

Columbia Institute for Tele‑Information

 

Contents

 

1      Interconnection as the key policy tool of telecommunications

1.1       Why regulate interconnection

1.1.1        Anti-monopoly rationale

1.1.2        Transaction cost rationale

1.2       Regulation of interconnection and unbundling in a competitive market

2      Interconnection as a tool for the creation of monopoly: the US experience

3      Interconnection as a tool for competitive entry

3.1       Reforming access charges

4      Interconnection as a tool for protecting competition

4.1       Local Competition

4.2       Unbundling

4.3       Quality

4.4       Cable television interconnection

4.5       Mobile interconnection

4.6       Internet interconnection

5      Pricing and pricing wars

5.1       Regulated pricing of interconnection

5.1.1        Zero‑charge (bill‑and‑keep) and lump-sum payments

5.1.2        Average cost pricing

5.1.3        Fully distributed cost pricing and two-part tariffs

5.1.4        Price caps

5.1.5        Ramsey pricing

5.1.6        Wholesale pricing

5.1.7        Efficient component pricing

5.1.8        Marginal cost pricing

5.2       Arbitrage

5.3       Incremental cost

6      Interconnection around the world

7      Interconnection and common carriage

8      The future of regulation of interconnection

Bibliography



 

1                                 Interconnection as the key policy tool of telecommunications

 

For more than a century, telecommunications around the world followed a classic model: a national monopoly owned or controlled by the state, centrally managed and providing a common public network.  By their very nature and tradition, these networks provide a small number of standardized and nationwide services, carefully planned, methodically executed, and universally distributed.  But over the past three decades, first in the United States and subsequently in much of the developed world, the forces of centrifugalism began to unravel this traditional system.  The driving force behind the restructuring of telecommunications was the shift toward an information‑based economy, which resulted in the rapid growth of telecommunications as the medium for the electronic transmission of information. 

The tension between the convergent forces of technology and the centrifugal forces of business competition is most pronounced on the front where they intersect:  the rules of interconnection of the multiple hardware and software networks and their integration into a whole.  As the various discrete networks grew, they had to interoperate. In the networks of networks, their interconnection became critical.  Control of interconnection by any entity, whether by government or by a private firm, became the key to the control of the telecommunications system and its market structure.

The regulation of interconnection has therefore emerged as the paramount tool of telecommunications policy and is likely to remain so for the reasonably foreseeable future, replacing the regulation of network operators’ retail prices (rate-of-return regulation) and of the entry of competitors.

The term “interconnection” is defined by the International Telecommunication Union as:

“The commercial and technical arrangements under which service providers connect their equipment, networks and services to enable customers to have access to the customers, services and networks of other service providers.” (ITU, 1995)

 

The most traditional form of interconnection has been parallel or cooperative interconnection.  In that arrangement, dominant carriers link up with carriers similarly dominant in other regions.  Their relation is that of partners and two-way correspondents; they jointly extend network externalities to their customers and often raise their prices in a joint maximization strategy.  This cartel type of interconnection is in decline due to its inability to maintain control over entry.

The second classic interconnection arrangement is vertical, between a provider that possesses market power in one stage of the transmission chain and another provider that requires use of the bottleneck in order to provide service.  An example would be a long-distance company interconnecting into a local exchange carrier.  This type of interconnection has been contentious since the early days of telecommunications.  It has been studied and analyzed over the years, but new permutations keep emerging.  For example, the interconnection of Internet service providers into cable TV networks, or of fixed networks into mobile carriers.

More recently a third type of arrangement has been taking center stage, that of a horizontal interconnection, in which competitors for the same markets and customers link up with each other.  In the past this situation was suppressed by the stronger of the two parties, often with the support of government, sometimes in return for the fulfillment of a number of social obligations of redistribution.  In other cases, the horizontal participants were kept apart from each other by technology and regulation, as for cable television and telecom networks.  Today, many governments enable and even promote the emergence of such horizontal interconnection.

The term “interconnection” covers a wide matrix of relations.  On the physical level of transmission conduits, these include linkages within and among various types of entities and industries:

- Incumbent and new local telephone companies

- Traditional and new long‑distance carriers

- Mobile and radio carriers, including their access to spectrum

- Domestic and international carriers

- Dedicated “private networks” of organizations and user groups

- Computer local area and wide area networks, in particular the components of the Internet, such as backbones

- Telephone, computer, and video equipment

- Cable television, broadcast and satellite networks

On the higher levels of applications and content, interconnection becomes an issue for entities such as:

- Internet service providers

- Enhanced (value‑added) service providers

- Data and information providers

- Video program channels

On a geographic level, interconnection issues cross national boundaries and involve the carriers, service providers, and national policy makers of many countries. The direction of flows is another dimension: terminating vs. originating traffic, and one-way vs. two-way directionality.

Given the multitude of entities, their points of intersection are numerous and growing, and so is the number of disputes and issues -- technical, financial, operational, regulatory, international, and content-related.  Their common thread is the transfer of information streams from the network facilities of one communications entity to those of another.

 

1.1       Why regulate interconnection

1.1.1    Anti-monopoly rationale

There are two major explanations for government’s role, coexisting uneasily.  The primary explanation for a governmental role in assuring interconnection is market power. It starts with two assumptions: (1) that telecommunications are a service essential to society and economy, and (2) that monopoly provision is undesirable.  Given the incumbent’s head start of a full century, its economies of scale and scope, and the positive externalities of its reach, a new entrant cannot hope, it is argued, to succeed as a stand‑alone entity.  Yet the entrant must reach the customers of the incumbent and, in turn, be reachable by them.  Thus, if one wants to encourage competition to a strong incumbent, one must accompany it with an assurance of interconnection. And if the survival of fledgling competition is at stake, this rationale is readily expanded to justify interconnection on terms that are favorable to the entrant during an “infant” period.

The flip side of the anti-monopoly rationale is that a carrier without market power would owe no interconnection to anybody.  This means an asymmetrical arrangement among carriers, which in turn creates instability.  If interconnection rights vanish with bottleneck power, the determination of the point at which that power ends is fiercely fought over.  The question, after all, is not an easy one to answer conceptually or empirically, and it may vary by location, service and customer class.

 

1.1.2     Transaction cost rationale

The other major rationale for the regulation of interconnection might be called the “transaction cost” explanation.  This view centers on the positive externalities of networks.  Interconnection is designed to provide an element of integration to the increasingly disparate network environment.  Information flows across numerous pathways, in a chain of transmission that may involve half a dozen carriers.  Indeed, with packet-switched communication, which is the mainstay of much of Internet communications, information between two points may travel simultaneously over a wide variety of paths.  In such an environment, interconnection rules are a transaction‑cost reducing arrangement, and as such are similar to legally imposed arrangements aimed at reducing transaction cost in other parts of the economy.  The interconnection rules may limit some freedom of negotiation, but they also facilitate commerce and transactions.  They establish symmetry in the treatment of various carriers, and eliminate continuous market power tests.

 

1.2       Regulation of interconnection and unbundling in a competitive market

The historic experience with interconnection around the world shows that interconnection is not made available freely by an incumbent to its competitors.  Nor is the claim to interconnection as a right given up voluntarily by new entrants once competition emerges.  On the other hand, interconnection is voluntarily initiated by collaborating and non-competing carriers, such as those of different countries.

Often, the terms of interconnection are left nominally or initially to the parties’ negotiation.  Yet regulatory intervention is frequent where there is an asymmetry in bargaining strength and in the urgency for interconnection, which is usually the case. Even where formal regulatory intervention does not take place, the negotiations are shaped by the expectations of what the regulator’s decisions would be.  Those decisions, in turn, depend on fundamental policy priorities. 

As a matter of empirical fact, interconnection is regulated everywhere competitive telecommunications exist.  Even in New Zealand, which is supposedly without any telecommunications regulation, the courts of law, through their interpretation of the statutes of general competition, regulate interconnection.  The difference is institutional—a general regulatory body vs. a specialized agency—and it is not clear whether their substantive policy decisions would be fundamentally different or better.

Today the antimonopoly and the transaction cost views coexist uneasily, but they differ in their perspectives of the future.  In the antimonopoly view, the regulation of interconnection is an essentially transitional task that will fade away with the emergence of real competition.  Interconnection regulation would decline over time.  In contrast, the transaction cost rationale comes to the opposite conclusion.  As open entry permits more and more carriers to offer services, the need for basic rules for their interaction becomes increasingly important if the overall network infrastructure is not to fragment into incompatible network parts.  The antimonopoly view is asymmetric, requiring interconnection by large carriers but not by their competitors.  In contrast, the transaction cost view is symmetrical, applying interconnection to all carriers.

 

2                                 Interconnection as a tool for the creation of monopoly: the US experience

 

Interconnection is not a new issue but goes back over a full century.  Control over interconnection was used to establish the monopoly system.  It was later used in the second stage of interconnection policy to introduce competition. In its third and present phase, interconnection policy is increasingly used to promote (as opposed to enable) competition.

In the United States, the initial monopoly was based on patents rather than regulation. Once the basic Bell patents expired in the 1890s, independent competitors entered, especially in rural districts and in central business districts.  The Bell Company’s initial policy was to refuse interconnection to the independents.  Its strategy centered on the control of interconnection: of equipment into its network, of rival local networks into the Bell local networks, and of rival networks into the Bell long-distance system.  Interconnection was granted, if at all, through a contractual agreement, because this allowed AT&T to exercise its substantial bargaining power.  In contrast, the new entrants preferred interconnection as a matter of legal right and sought mandatory interconnection supervised by the state.  This scenario is classic, and it has repeated itself in recent times around the world.

In Europe, too, early control over interconnection was used to establish monopoly.  Especially in Sweden, Norway and Britain, competitors were initially successful, but monopoly soon took over through control over interconnection. No regulatory counter-force protected competitors against the dominant operator, which was government itself. 

In the United States, interconnection became regulated. In 1904 a federal court upheld the powers of a state to mandate the interconnection of rival networks.  By 1915 more than thirty states did so (Gabel and Weiman, 1994), and all still do so today.

Soon, however, interconnection was used to stabilize a cartel. Several independent telephone companies brought federal antitrust complaints against AT&T, based on its refusal to offer interconnection, and they were joined by the Justice Department in 1913.  Under pressure, AT&T accepted an agreement known as the Kingsbury Commitment, under which it granted the independents interconnection in return for their joining an AT&T-led cartel.  A system of de facto exclusive franchises emerged in which only one telephone company served any particular area.  That company was protected from rival entry because no rival had a right to interconnection.  Thus the independents now had an interest in an effective Bell system as an interconnecting agent, technology driver, standard setter, and cartel enforcer. As a result, the telephone industry moved from extensive competition to extensive oligopolistic cooperation.

This historic episode illustrates that the creation of interconnection, by itself, does not necessarily bring about competition, and can in fact lead to cartel cooperation that turns new entrants into complements rather than competitors (Mueller, 1988).  Thus, interconnection does not assure competition, but the lack of such interconnection has historically prevented its emergence.  Interconnection has been a necessary but not sufficient condition for competitive telecommunications.

 

3                                 Interconnection as a tool for competitive entry

 

For over seventy years AT&T’s control over interconnection provided it with the tools to establish a monopoly shared with small “independent carriers”.  However, the power that AT&T had ceded to the government to regulate such interconnection had the potential to turn against it.  This began to happen in the 1960s, and interconnection now became a tool for destabilizing AT&T.  Interconnection policy moved into its second stage, that of opening markets.

            The first step was to open interconnection of equipment.  The prohibition of such attachment had enabled AT&T to shift earnings to the unregulated manufacturing activities and away from the profit-regulated network services, and to shift costs in the other direction.  The two key decisions were Hush-A-Phone (1956) and Carterfone (1968), which allowed competitive equipment to be owned by customers and to be connected to the network.

When it came to the next chapter, the interconnection of long-distance networks, the United States began to move into uncharted waters.  The availability of microwave transmission equipment after World War II drastically lowered economic and technological entry barriers for long-distance communication.  In 1969, one company, Microwave Communications, Inc. (MCI), won approval from a reluctant FCC, over an adamant AT&T’s objections, to provide private line service (FCC, 1969).  This set the stage for a battle over long‑distance interconnection.  MCI soon wanted to expand into generally available public switched service.  To do so successfully, it needed to interconnect its long-haul private lines with AT&T’s local networks in order to connect its subscribers to non-subscribers, and vice-versa, and it won its court appeal against an unfavorable FCC ruling in the Execunet decision (1978).

Interconnected competition had a cataclysmic effect on the U.S. telecommunications industry.  Eventually it led to the breakup of the world’s largest telecommunications company, AT&T, based on an antitrust case which asserted that the incumbent had used unfair practices to suppress its competitors, especially through discriminatory interconnection practices.  The government’s lawsuit resulted, after a 1982 consent decree, in the most massive corporate reorganization in business history.

 

3.1       Reforming access charges

 

The break-up of the Bell System left US regulators scrambling to create a new system of interconnection prices, balancing the efficiency goals of economics with the redistributive priorities of social policy and inter-regional politics. The main questions were whether interconnection prices would be usage-sensitive or flat rate, and what their magnitude would be. Each answer had major implications for some industry segments, geographic regions or user groups.  In 1982, the FCC approved a new post-divestiture access charge plan (FCC, 1982). The plan reflected economists’ preference for flat charges to recover fixed costs, instead of usage-sensitive ones, because flat charges would not distort usage or encourage uneconomic “bypass” of the local network as customers move to other carriers.  But it also kept a strong element of redistribution and subsidy in order to maintain widespread telephone connectivity (universal service).  The access charge plan included a flat monthly per-line charge on users as well as a variable one on long-distance carriers.  The share of total line costs thus recovered through interstate usage was 25 percent, maintaining a transfer from long-distance to local service. This plan was followed by several adjustments.

            The Telecommunications Act (1996) changed many aspects of competition and regulation.  A section entitled “Interconnection Requirements” was included in the law.  The Bell companies, eager to enter the long-distance market, had, in return, to provide interconnection to new entrants, unbundle their networks, allow the resale of their services by competitors, and provide for number portability.

Following the mandate of the Act, the FCC took a further step toward flat-rate access charges, and removed some implicit universal service transfers in favor of explicit support mechanisms.  Subscriber line charges—the flat fees paid by customers directly—were raised, and the incumbent local exchange companies (ILECs) were authorized to assess another flat charge, a “pre-subscribed interexchange carrier charge” (PICC), on the long-distance carrier chosen by the end user.  After further modification towards a flat-rate system, by the year 2001 per-minute interconnection charges between incumbent LECs (ILECs) and competitive LECs (CLECs) typically ranged from 0.27¢ to 0.55¢ per minute, and were coming down toward 0.10¢.  Per-minute termination rates for wireless, paging, and interexchange service were roughly 0.3¢, 0.4¢, and 0.55¢, respectively.  These numbers were declining, and the spread between them was narrowing, reducing the incentives for arbitrage. The per-minute charges for enhanced services, including Internet telephony, remained zero, leading to suggestions that all interconnection access charges be abolished (bill-and-keep), which would complete the transition from usage-sensitive surcharges to flat-rate charges.  However, universal service subsidies were maintained and partly supported by a surcharge on users’ phone bills, which remained usage-sensitive in at least some of its components.

4                                 Interconnection as a tool for protecting competition

 

4.1       Local Competition

Competition in local infrastructure is the toughest challenge for entrants, given the large investments needed.  It is also the key to a “level playing field” for the other telecom services.  Since virtually every communications flow ultimately has some local component, a monopoly in that segment affects all of telecommunications.  It is therefore not surprising that policy makers, once they had embarked on a pro-competition strategy, were eager to remove this last bottleneck.  Even for the deregulation-minded, an active interconnection policy became the tool to accelerate local entry.

In the United States, in the mid-1980s a second wave of competitive entry into telecommunications began. Private line dedicated local service was first approved in the US by the New York Public Service Commission in 1985 for Teleport Communications.  New York was also the first state to subsequently permit competitive switched local exchange service.  By 1995 most major states had approved competitive local entry.  The federal 1996 Telecommunications Act extended this across the rest of the country.

The viability of local competition rises and falls with interconnection.  The critical issue surrounding the competitive local exchange companies—the CLECs—is whether they can cost-effectively interconnect to the incumbents’—the ILECs’—networks under the same conditions the integrated incumbent provides to its own operations, known as “comparably efficient interconnection” (CEI).  Whether CEI needed to be offered by ILECs to their rivals, and what CEI actually means in technical and economic terms, became a subject of intense struggle.

The purest form of CEI is the placement by a CLEC of cables and equipment inside the ILEC’s central office, known as “physical collocation.”  This was resisted by incumbents as intrusive. An alternative is a handoff at an outside meet point such as a manhole, from which the ILEC would carry the CLEC’s traffic to its central office on its own facilities and charge for this transport element and associated equipment at retail rates. This alternative is more expensive for the CLECs.  A third arrangement is “virtual collocation,” with the physical handoff taking place either outside the central office or inside it, with the LEC owning all the cable and termination equipment, but with the charges being equivalent to those the CLEC would have incurred with physical collocation.

In the United States, the local entrant Teleport Communications gained approval in 1987 from the New York Public Service Commission for physical collocation of private line services.  This was soon expanded to switched public service.  Once several other states had liberalized local entry and physical or virtual collocation, the FCC had enough “state cover” to establish national collocation rules for switched services in 1993.

            Such interconnection is available not only to CLECs but also to interexchange carriers and even to some end-users.  There is no reciprocity for physical collocation.  The rules on collocation have been attacked as a regulatory taking of private property without compensation. Whether they are or not is a matter of constitutional interpretation, but they are certainly not “deregulatory”; they are an active step to accelerate local entry.

The 1996 Telecommunications Act modified FCC and state rules, requiring each telecommunications carrier to interconnect directly or indirectly with other telecommunications carriers, and requiring ILECs to unbundle.  Interconnection agreements need to be approved by the respective state utility commission. Together with the FCC’s subsequent implementation rules, a set of strong interconnection rights was established for access into most local networks, and the Bell companies were given incentives to open their local networks to interconnection if they wanted approval to enter most long-distance services.

 

4.2       Unbundling

Interconnection is fairly meaningless without reference to where interconnection would physically take place.  If an incumbent network offers an entrant interconnection at a far‑off point, little is resolved.  In consequence, interconnection points are established at various levels of a network, thereby "unbundling" it.

Unbundling requirements, too, are significant regulatory interventions. They claim to regulate in order to deregulate.  They aim to create viable competition, especially in the early phases of competitive entry when entrants are likely to be weak. There are several advantages to an incumbent in bundling:

1. Bundling forces a competitor to buy unneeded services to get needed ones, thus raising the competitors’ operating costs.  Why not simply charge a high price for the needed service element and unbundle it? Bundling is a tying action. Market power in one component can be extended by bundling to a component where market power does not exist. One reason to do that might be to shift profits and thereby hide them from regulators.

2. Bundling on the retail level against a competitor, using a monopoly input on the wholesale level, permits a price squeeze. 

3. Bundling permits price discrimination based on utilization among users.  IBM, for example, used to bundle its machines with a requirement contract for punch cards, which were priced above cost.  This allowed it to charge high‑volume users of its machines more than low‑volume users (IBM vs. US, 1936).

In a competitive environment, markets will determine the extent of unbundling that firms will offer. Bundling is primarily a problem where market power exists. Bundling raises entry barriers. Hence, where interconnection is actively encouraged, unbundling requirements eventually become regulatory policy.

To the incumbent, the greatest problem of unbundling, beyond the loss of the strategic tools described above, is that it is prevented from being the exclusive or primary beneficiary of superior efficiencies or talents in a particular market segment, because it cannot shield those efficiencies from access by its own competitors.  This is particularly a problem when unbundling and interconnection are asymmetric, leaving a competitor with access to an incumbent’s superior elements, while not having to grant those in reverse.

Suppose one has been able to unbundle the monopolistic parts of the network.  Do the monopolistic network elements need unbundling from each other?  Where the monopoly is otherwise unregulated, this seems pointless, because bundling unneeded service elements is just a way to raise prices, which could be accomplished in other ways, too.  But if regulation limits profits or prices on each element, requiring a competitor to take even unneeded ones is a way to raise the price.  In such cases, regulated unbundling is likely.

In 1996, following the requirements of the Telecommunications Act, the FCC issued a voluminous order (FCC, 1996).  It prescribed minimum points of interconnection and adopted a list of unbundled network elements (UNEs) that the incumbent LECs must make available to competitors.  These were seven: network interface devices, local loops, local and tandem switches, interoffice transmission facilities, signaling and call-related database facilities, operations support systems and information, and operator and directory assistance facilities.  Incumbent LECs were required to provide equal and nondiscriminatory access to these elements in a manner that allowed entrants to combine such elements as they chose, and were prohibited from imposing any restrictions on the use of these elements. In 1999, the FCC also unbundled the high-frequency range of the copper loop, thereby making it available to rival digital subscriber line (DSL) providers as a separate network element.  It also dropped operator and directory assistance services from the list, unbundled sub-loops, and established periodic reviews to carve out geographic regions and network elements that had become open and where entrants required no regulated UNEs.

Unbundled network elements are priced in the US using forward-looking total element long-run incremental cost (TELRIC) principles, discussed below.  The actual calculations are based on engineering models developed initially for the calculation of universal service cost allocations.  These models were then applied by the states, and the resulting prices diverged widely, ranging from $3 for a local loop in one state to almost $30 in another.

The unbundling of the local loop facilitated entry in sub-markets such as local and tandem switching, and in the transport segments in between. It is harder to determine the impact on facilities-based competitive entry into the local loop itself.  On the one hand, it helped a competitor stage a gradual move in the direction of the user; on the other hand, it also made it possible to enter local competition without heavy investments in local loop infrastructure. On the whole, it provided more flexibility in providing local service, and in phasing out regulatory intervention when the need for it had run its course.

By the year 2000, the concept of network unbundling was firmly established in the United States, the European Union, Japan, and in the regulatory principles of the WTO. Though unbundling is a significant regulatory intervention, it had become a key tool for governments pursuing pro-competitive policies. Yet it was also possible to anticipate its reduction.  With competition, one could expect the unbundled elements to shrivel down to the last and most expensive to enter--the last part of call termination on the local loop beyond the switch, or even further downstream to the sub-loop. Unbundling facilitated such transition, though at the cost of greater operational and procedural complexity.

 

4.3       Quality

Society depends more and more on the availability of electronic communications.  The World Wide Web is an example.  User requirements keep increasing rapidly.  In consequence, demands on service quality increase because failure becomes more costly.  In a transmission sequence involving multiple carriers, signal quality will not normally be better than that of the "weakest link."  Hence, a bottleneck carrier with inferior quality could obviate the efforts of other carriers toward higher quality. They might lower their quality to the lowest common denominator. Thus, overall quality would decline.

      This and similar reasons led to the fear that a decentralized competitive environment would lead to service degradation.  On the empirical level, measuring the quality of service is quite complex, and even more so across carriers. The term "quality" has many dimensions, and measurement problems abound. On the conceptual level, economic analysis does not provide unambiguous answers on what to expect to happen to quality as competition emerges: a monopoly need not compete for users by offering superior quality, but competition might lead to a low-price, low-quality equilibrium, while regulatory incentives might lead a monopoly to overcapitalization and above-equilibrium quality. A more competitive regime may well reduce overcapitalization and lead to an economically more efficient, but lower-quality, system.

 

4.4       Cable television interconnection

Some cable television operators, starting in Britain in the 1980s, began to offer local telephone service.  These calls are carried on the cable television companies’ lines, whether separate or integrated, and are then usually interconnected with the networks of local exchange and long-distance carriers.

            In the emerging network of networks the distinction between telephone and cable lines blurs, since upgraded telephone lines are able to carry broadband services including video and high-speed Internet services, while cable lines connected to switching and routing equipment are able to perform traditional telephone functions, as well as provide Internet transmission.

This technical and business convergence raises the question of regulatory convergence.  It is relatively easy to deal with cable interconnection into phone networks.  Here, cable companies are treated in the same way as any other CLEC competitors.  But it is more difficult to deal with other types of services. A special problem is the issue of Internet connectivity, specifically whether a cable television firm providing Internet-capable transmission may provide preferential terms to some Internet service providers (ISPs) and web portals, and discriminate against others.  Phone companies are restricted from similar discrimination by common carrier obligations.  This issue burst to the fore in 1998, when AT&T acquired the largest and third-largest American cable TV companies, TCI and MediaOne.  AT&T’s strategic aim was to benefit from economies of scope by offering alternative local telephone service bundled with long-distance, cable TV, wireless, and Internet services, including its own cable-based ISP @Home and the web portal Excite. @Home and Excite received preferential access to AT&T cable customers.  AOL, the largest of the ISPs, initiated a fierce regulatory fight against AT&T through all levels of government.

The rival ISPs demanded an equal access arrangement to cable TV networks. After much legal wrangling, an appellate court classified in 2000 the Internet access provided by a CATV company as a “telecommunications service” subject to federal regulation, which meant that cable companies in their Internet provisioning were potentially subject to all the regulations aimed at telephone companies, such as unbundling and interconnection requirements.  This set the stage for a process of regulatory convergence of mass media and telecom carriers, attendant with a full set of issues of great conceptual, technical, financial and political complexity.

Paralleling these developments was the approval process for the merger of the media giants Time Warner and AOL.  In 2001 the antitrust agency FTC and the communications agency FCC conditioned their agreement to the merger on the companies’ commitment to meet several conditions. Time Warner would have to provide high-speed cable access to at least three competitors.  Where agreements are not reached, the FTC may appoint a trustee to negotiate an access agreement on AOL Time Warner’s behalf.  Access agreements are subject to a “most favored nation” clause if other ISPs subsequently get a better deal.  The company may not discriminate against content provided by rival ISPs.  AOL must continue marketing DSL service offered by telephone companies in cities where Time Warner owns cable lines, on the same conditions as AOL offers in non-Time Warner cities.  Time Warner’s systems must be available to Internet and interactive TV competitors on reasonable terms. Finally, AOL had to open its vast instant messaging community to access by other IM providers for the next version of advanced IM. These were fairly far-reaching conditions.  Once on a roll, the FCC also initiated a proceeding to determine whether cable TV companies have the power to harm competing suppliers of interactive TV.

 

4.5       Mobile interconnection

Cellular telephony was a major improvement in mobile communication because it can reuse the same frequencies in multiple geographical areas, called cells.  Each cell is connected to a central serving point, and from there the traffic is routed into a LEC, an IXC, or another carrier. The wireless carrier has to pay for the part of its traffic that is routed onto the landline fixed carrier’s transmission facilities. Traffic also flows in the opposite direction, from the fixed into the mobile network, though at a lesser rate.

In the U.S., the FCC left the terms and conditions of mobile-fixed interconnection to the states, and to direct negotiations between the parties. In some instances, state regulation fixed many of these terms with a "standard contract."

The flow of traffic from mobile systems to fixed-link systems is significantly higher than the reverse flow.  Thus, the balance of traffic is skewed.  This has led to asymmetric interconnection charges that are higher for call imports by mobile networks than for exports (Cave, 1994). The issue pivots on who pays for an incoming wireless call.  In most countries the calling party pays (CPP).  In the U.S., it has long been the receiving party that pays (RPP).  CPP makes it cheaper to initiate calls to mobile phones, and to use prepaid cards, thus increasing volume.  Yet, in a CPP system, LEC customers might be confronted with bills for calls they imagined to be local but which were, in fact, calls to a mobile customer carrying higher per-minute charges. According to an ITU study, the average per-minute interconnection charge for European fixed-to-mobile calls was US$0.21 in 2000. The actual cost of mobile transmission, while not low due to the opportunity cost of the spectrum and other factors, is below that number. The charge in the opposite direction, from mobile into fixed, was much lower, by a ratio of 20:1 (Melody and Samarajiva, 2000). In contrast, in the US with its RPP system, per-minute access charges from fixed to mobile were only $0.003. The CPP system leaves the caller exposed to the pricing policy of the mobile operator, with no competitive alternative available for reaching the mobile user, because that choice is made by the non-paying party, the recipient. That user could be compensated by low prices in return for letting those who call him be charged at a high rate.  These dynamics have led to some advocacy of price regulation of mobile termination. However, as mobile connectivity becomes pervasive and competitive, it is likely that mobile carriers will offer flat-rated mobile termination and origination arrangements that would reduce this problem.

 

4.6       Internet interconnection

Interconnection issues have also emerged for the Internet.  Its dynamics provide insights into the question whether interconnection would be offered in a competitive market, and on what terms.  Since the Internet is a loose federation of autonomous networks, interconnection and access issues abound.  The first phase of the Internet in the US was government dominated through provision of the first backbones, initially by the Defense Department’s ARPANET, then with the NSFNET backbone.  In time, this created problems in accommodating other backbone networks and routing packets when different backbone alternatives became available.  This presented technical protocol problems as well as economic issues involving financial settlements among networks and ISPs.  Interconnection nodes are known as Network Access Points or NAPs.  For a single fee an ISP can access, at the NAP, the other backbones present at that location.  Several backbones formed their own commercial interconnection point, the Commercial Internet Exchange (CIX), in Santa Clara in 1991, and agreed on a settlement-free traffic exchange.  CIX could not dominate interconnection: first, because rival interconnection arrangements existed independently from CIX, such as those for the US government and those of other countries; and second, because alternative commercial interconnection points also emerged in the United States.  On the East Coast, the Metropolitan Area Exchange-East (MAE-East), owned by UUNET (later acquired by WorldCom), was created to provide bilateral interconnection arrangements among major backbones.  No uniform multilateral agreement and no settlement payments exist.  Instead, the ISPs negotiate agreements with each other.

Even with such competition in interconnection, the existing system of multi-backbone “public” meeting points often exhibited bottleneck characteristics.  Many ISPs and backbones therefore shifted to direct “private” peering and regional arrangements.

Similar arrangements exist in many countries.  In some countries the function of NAPs is not only technical but also regulatory.  NAPs are used for the control of content in countries such as China and Saudi Arabia which limit their citizens’ access to foreign information.  Thus interconnection can become a competitive service, but it can also be a tool for control.

Key features of peering arrangements among ISPs are that each Tier 1 “core” ISP has a separate interconnection arrangement with each of the other Tier 1 ISPs, and that such a Tier 1 ISP accepts traffic destined for one of its customers but does not accept transit traffic destined for another Tier 1 ISP’s customer.  This reduces competition.  ISPs in peering relationships most typically engage in “bill-and-keep” rather than make settlement payments to one another (Lehr and Weiss, 1996).  However, it is difficult for any ISP to assure that its partners do not “dump” traffic onto it as a default route (Baake and Wichmann, 1999).  If settlements (i.e., payments) do not take place, each ISP has an incentive to use the other’s backbone rather than its own, known as the “hot potato” strategy.  Peering arrangements of large backbones with smaller ISPs require financial payments, and have at times been used to put financial pressure on smaller competitors and to impose requirements that reduce arbitrage.

Internet interconnectivity is a case of largely unregulated service providers that are partly competing and partly collaborating.  Do they voluntarily provide interconnection to each other?  The answer is yes.  They do so because of the inherent externality advantages to their members in having a larger number of network participants.  Several models of interconnection emerged, including provision by third parties who offer interconnection as their business.  Thus competition in the operation of interconnection is likely. On the other hand, the interconnection arrangement among backbone providers demonstrates their common incentives to eliminate arbitrage by resellers and to restrict smaller competitors. 

Thus interconnection, as an essential service, is subject to similar dynamics as other important goods.  Its distribution becomes subject to attempts at raising entry barriers by major industry players seeking to limit and stabilize an industry and pricing structures whose fixed costs are high and marginal costs low.  Here, too, control over interconnection becomes control over the market.

 

5                                 Pricing and pricing wars

 

Interconnection comes with a variety of controversies, none more contentious than its price. The setting of interconnection charges can be used as a tool by regulators to finance unrelated policy goals, by incumbents to frustrate competition, and by entrants to gain a subsidy. The challenge for regulators is to set prices for an intermediate good—interconnection—in a way that encourages efficient competitive entry and avoids inefficient entry.  Setting interconnection rates provides wide margins for economics and politics. These prices can determine industry structure and network architecture (Vogelsang and Mitchell, 1997). A range of pricing models exists, and several are described in the following.

 

5.1       Regulated pricing of interconnection

5.1.1    Zero‑charge (bill‑and‑keep) and lump-sum payments

Two networks might agree to a zero charge where traffic and costs between the entrant and the incumbent are balanced, and where it therefore would be administratively easier to impose no charge.  Since costs are not passed on, each carrier has an economic incentive to increase the efficiency of its own network.  At the same time, each firm will also try to hand off its outgoing calls to the other carrier as early as possible, and to discourage usage of its own network. Bill-and-keep also has problems when originating and terminating usage proves unbalanced.  In that case, carriers with disproportionately large originating usage get to keep most of the revenues, even as they impose the largest cost burden on the carriers into which they interconnect.  A bill-and-keep arrangement was the initial arrangement among commercial Internet service providers, with each backbone provider trying to unload its traffic and associated costs as quickly as possible onto its partners, and providing them with poor service to discourage them from doing the same. Bill-and-keep has been advocated as an efficient regime for interconnection pricing (Brock, 1995). For interconnected calls, the calling party would bear the cost of delivery to the called party’s central office, and would not have to pay for termination from there (deGraba, 2001). This system would reduce the problem of the local terminating monopoly by eliminating the incentive to charge high interconnection prices, since none exist. It also eliminates the discrepancy in the US treatment of access charges between ISPs and IXCs.
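
The traffic-balance point can be made concrete with a minimal sketch. The volumes and the per-minute termination rate below are hypothetical, chosen only to illustrate why bill-and-keep is unproblematic when flows are balanced and problematic when they are not.

```python
# Minimal sketch (hypothetical volumes and rate) contrasting per-minute
# termination charges with bill-and-keep when traffic between two carriers
# is unbalanced.

TERMINATION_RATE = 0.005  # assumed charge per terminated minute, in dollars


def net_payment_a_to_b(minutes_a_to_b: float, minutes_b_to_a: float, rate: float) -> float:
    """Net settlement flowing from carrier A to carrier B under per-minute charges."""
    return (minutes_a_to_b - minutes_b_to_a) * rate


# Balanced traffic: the net settlement is zero, so dropping charges altogether
# (bill-and-keep) changes nothing and saves billing costs.
print(net_payment_a_to_b(1_000_000, 1_000_000, TERMINATION_RATE))  # 0.0

# Unbalanced traffic: A originates far more than it terminates. Under per-minute
# charges A compensates B; under bill-and-keep B would bear the termination
# cost without revenue -- the problem noted above.
print(net_payment_a_to_b(1_000_000, 200_000, TERMINATION_RATE))    # 4000.0
```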

A related system is to charge a fixed (lump) payment for access by a carrier. Such a fee can vary with the capacity provided. Lump sum fees do not affect the marginal behavior. But they, too, lead to wasteful usage where marginal costs are non-zero, and they are advantageous to the heaviest of users.  At the same time, a high lump‑sum charge could lead to incentives for interconnectors to seek a bypass solely to avoid the payment.

 

5.1.2    Average cost pricing

All cost-based pricing methods have the administrative advantage that they require only information about the providing network and not about users, uses, and interconnectors.  Interconnection prices can be set at the average cost of providing such interconnection.  The average cost can be based on actual historic costs or on hypothetical forward-looking costs.  This approach therefore has many of the same practical problems associated with marginal cost pricing (discussed below), though it is generally easier to determine total costs than marginal ones. The problem with the average cost approach is that it only represents the mean cost of capacity usage. It therefore does not reflect cost variations across a given time period, notably the cost difference between peak and off‑peak usage. Average cost in a capital-intensive industry will also usually be above marginal cost, and such a price would deter entry by interconnection.  Another problem is the conceptual and practical application of average cost pricing to multi-product outputs.
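
Both defects can be shown in a minimal sketch with hypothetical cost figures: the averaged price exceeds marginal cost, and it is insensitive to when the capacity is actually used.

```python
# Minimal sketch (hypothetical cost figures) of why an averaged interconnection
# price sits above marginal cost in a capital-intensive network and ignores
# peak/off-peak cost differences.

FIXED_CAPACITY_COST = 1_000_000.0   # assumed annual capacity cost, dollars
MARGINAL_COST = 0.002               # assumed cost per conveyed minute, dollars
ANNUAL_MINUTES = 200_000_000.0      # assumed total conveyed minutes per year

average_cost = (FIXED_CAPACITY_COST + MARGINAL_COST * ANNUAL_MINUTES) / ANNUAL_MINUTES
print(f"average cost per minute:  {average_cost:.4f}")   # 0.0070
print(f"marginal cost per minute: {MARGINAL_COST:.4f}")  # 0.0020

# The single averaged figure is charged alike for a peak minute (which strains
# capacity) and an off-peak minute (which costs almost nothing extra), and its
# premium over marginal cost can deter entry via interconnection.
```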

 

5.1.3    Fully distributed cost pricing and two-part tariffs

Fully distributed cost pricing (FDC) tries to combine the economic incentives of marginal cost pricing with a way to cover the fixed costs.  In theory, it combines all costs—common, fixed, and incremental—and allocates them to different services—such as local and long-distance, residential and business—according to a formula.  The cost per unit for service-related items is usually based on telephone message minutes or message-minute-miles. For many years, FDC was the dominant method of regulated rate setting.

As with average cost, a basic defect of fully distributed cost is that it exceeds marginal costs, and therefore fails to measure the costs causally imposed (Kahn, 1988).  This may encourage entry by firms whose long run marginal costs are higher than those of the incumbent (Braeutigam, 1980).  Another problem is that the allocation of the joint costs of an incumbent in a multiproduct setting is essentially done on a subjective basis.  This leads to protracted disputes over the appropriate allocation of joint and common costs.

A two-part tariff combines a flat and a variable charge. It can be composed of a variable usage charge on top of a flat capacity charge. The former is supposed to recover all variable costs and is charged on a per‑unit basis. The latter distributes the fixed costs to customers according to their respective basic requirements. Though not using this terminology, this approach has been long used in the US for local access, with a combination of flat and variable charges.
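
The structure of such a tariff can be sketched as follows; the capacity unit and the specific charges are assumed values for illustration, not figures from any actual tariff.

```python
# Minimal sketch (hypothetical charges) of a two-part interconnection tariff:
# a flat capacity charge meant to recover fixed costs plus a per-minute usage
# charge meant to recover variable costs.

FLAT_CAPACITY_CHARGE = 10_000.0   # assumed monthly charge per capacity unit, dollars
USAGE_CHARGE = 0.002              # assumed charge per conveyed minute, dollars


def monthly_bill(capacity_units: int, minutes: float) -> float:
    """Interconnector's monthly bill under the two-part tariff."""
    return capacity_units * FLAT_CAPACITY_CHARGE + minutes * USAGE_CHARGE


# A small and a large interconnector face the same marginal price per minute,
# while the flat element spreads the fixed costs according to the capacity taken.
print(monthly_bill(1, 1_000_000))    # 12000.0
print(monthly_bill(5, 20_000_000))   # 90000.0
```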

 

5.1.4    Price caps

Interconnection prices can also be set by price caps.  In principle, such a cap can be tacked on to any pricing scheme except where the price is zero. A given interconnection price, however derived, is then indexed to inflation, productivity expectations, and other factors.  A high productivity factor would in time lead to low prices.  A price cap system provides incentives for cost reduction, because a carrier would gain by cutting cost. The productivity factor might turn out to be too generous, or too harsh, for an incumbent LEC.  It would be subject to periodic recalibration (with its associated regulatory fight over costs and profits), which means that profit measurements (revenue minus cost) do not really disappear.  But in the short term a price cap system is a quick way to proceed, due to its gradualism and simplicity.
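
The mechanics can be sketched in a few lines; the starting charge, inflation rate, and productivity (X) factor below are assumed values for illustration only.

```python
# Minimal sketch of a price-cap ("inflation minus X") trajectory, using assumed
# values for the starting charge, inflation, and the productivity offset.

START_CHARGE = 0.010   # assumed initial interconnection charge, dollars per minute
INFLATION = 0.03       # assumed annual inflation rate
X_FACTOR = 0.06        # assumed productivity offset

charge = START_CHARGE
for year in range(1, 6):
    # Each year the allowed ceiling moves with inflation minus the X factor.
    charge *= 1 + INFLATION - X_FACTOR
    print(f"year {year}: capped charge = {charge:.5f}")

# With the productivity factor above inflation, the ceiling declines steadily;
# the carrier keeps any cost savings it achieves below the capped charge.
```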

Nothing in telecommunications regulation stays simple for long, however.  Soon, disputes arise over what is included in the “basket” that is subject to price caps, whether there should be different caps to reflect the different productivity trends of various network elements and services, whether charges to consumers should be subject to a “retail” cap different from the cap on charges to other companies, and whether new network services and technologies should be excluded.

5.1.5    Ramsey pricing

This pricing rule recommends that, when marginal cost pricing will not recover total cost, prices be differentiated so that each customer class is charged a markup over marginal cost inversely proportional to its demand elasticity.  The basic intuition behind Ramsey prices is that, to allocate fixed costs among all customers, the heaviest burden is put on those customers who want service badly enough that their behavior will be least affected (Baumol and Bradford, 1970). This welfare-optimizing principle is also known as the inverse-elasticity rule.
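
A minimal sketch of the rule, with assumed elasticities and an assumed scaling constant, shows how the markups fall out for two customer classes.

```python
# Minimal sketch of the inverse-elasticity (Ramsey) rule with assumed values:
# each class's markup over marginal cost satisfies (p - mc) / p = k / elasticity,
# with k a constant scaled so that total revenue meets the revenue requirement.

MARGINAL_COST = 0.002                                   # assumed cost per minute, dollars
ELASTICITIES = {"business": 1.8, "residential": 0.6}    # assumed demand elasticities
K = 0.3                                                 # assumed constant set by the revenue requirement

for customer_class, elasticity in ELASTICITIES.items():
    # Solving (p - mc) / p = K / elasticity for p gives p = mc / (1 - K / elasticity).
    price = MARGINAL_COST / (1 - K / elasticity)
    print(f"{customer_class}: price per minute = {price:.4f}")
# business:    0.0024
# residential: 0.0040
# The less elastic (residential) class bears the higher markup -- exactly the
# distributional objection discussed below.
```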

The problems associated with Ramsey pricing are several.  On the policy level, it means that customers with fewer options, often smaller users, would be charged the most—hardly a popular undertaking, given that it is the reverse of the traditional charging system under which such residential users tend to get a price discount.  Furthermore, Ramsey pricing requires the prevention of arbitrage, i.e., significant regulatory monitoring and enforcement, because such arbitrage would destroy the ability to price differentiate.  Also, the informational requirement on regulators to set these different prices correctly can be prohibitive.  Not only must marginal costs be known, but information on the elasticities for different customer types must be available as well.  In most cases, information is incomplete or asymmetric (Laffont and Tirole, 1994).  This situation can be improved somewhat through the creation of proper incentives for the network provider through “global price caps”.  For more detail, see Mark Armstrong’s contribution in this volume.

 

5.1.6    Wholesale pricing

One can approach the question of pricing interconnection differently.  Interconnection charges cannot be separated from the more general issue of appropriate telephone rates.  One can think of an interconnecting network as a large user.  Should such a user pay a rate different from that charged by the incumbent carrier for the services it offers to other large users?

Treating competitive networks as each other’s large customers greatly simplifies administrative arrangements.  However, it also creates incentives for incumbents to squeeze their competitors through high conveyance prices, to charge usage prices for services whose costs are not traffic‑sensitive, and to bundle necessary elements with unnecessary ones. To compete, an entrant needs a price below the retail level for the final service. The key to resale is its price or, more precisely, the extent of its discount from the retail price.  The absolute level of the interconnection charge, by itself, is not germane to whether or not a non‑integrated entrant is able to compete with an incumbent, as long as the incumbent has to charge itself the same price (Kahn and Taylor, 1994).  Rather, the ability to compete is based on the margin between the interconnection charge and the incumbent’s final prices, i.e., on the avoidance of a vertical squeeze.  A major problem with the wholesale price approach is that, for reasons of public policy, many retail prices in telecommunications are kept low, such as those for residential and rural customers.  Hence, even a cost‑based wholesale price might not be low enough to permit the interconnector a retail profit.  On the other hand, a large discount of wholesale prices from retail might:

  1. Require the retail customers of the incumbent to subsidize the entrant and its customers;
  2. Distort the high‑capacity market, since other large users would presumably engage in efforts to obtain the same favorable rates;
  3. Result in arbitrage, leading to lower retail prices, which would reduce the wholesale prices further if they are a set percentage of retail prices, and resulting in a kind of “reverse squeeze” on the incumbent;
  4. Reduce the incentive to improve efficiency if they lead to a reduction in retail prices which would also result in a reduction in wholesale prices; incumbents may then conclude that the best strategy would be to increase retail prices in order to increase wholesale prices. 
  5. Reduce incentives by entrants to construct physical facilities, if the wholesale discount is favorable to their competitors. It has been argued, on the other hand, that resale entry and the intensified competition it engenders are likely to enhance incentives for ILECs to maintain and upgrade their existing network facilities (Beard, et al, 1998).

            Thus, wholesale pricing creates a delicate “knife‑edge” problem.  Set the discount too low, and the resale entrant cannot succeed; set it too high, and resale becomes more attractive than physical entry.  That situation also implies a continuance of a regulatory regime to set the discount since, in the absence of infrastructure competition, incumbent LECs would continue to control local distribution.  The knife‑edge dilemma may suggest a strategy of initially setting the discount large enough to encourage entry, but at the same time establishing a fixed and definite term for its termination or phase-out, i.e., a sunset clause.  The anticipation of the end of the regulated discount would then create incentives for entrants to construct alternative physical facilities.

            Another issue is that if the wholesale price is low, it would then be to the advantage of large users to avail themselves of that price.  Hence, wholesale prices are generally made available only to telecommunications carriers, though the meaning of that term is not always clear.  Universities provide telecom services to their students and faculty, and landlords furnish service to their tenants.  Gaining the advantage of interconnection prices hence requires some official recognition as a telecommunications provider.

While there is frequent agreement on the need to provide a discount for interconnecting wholesale services, the extent of the discount is usually disputed.  Interconnectors and resellers advocate a substantial discount as the quickest method of encouraging competition. In their view, a discount that does not permit viable competition should be presumed unreasonable. This brings them close to advocating that their survival (or even prosperity) is the test of a reasonable discount.  Physical carriers, on the other hand, such as incumbent LECs, but also their infrastructure competitors such as cable TV companies, facility-based CLECs, and IXCs, favor low discounts, since substantial discounts would provide disincentives for entrants to construct their own facilities and thus discourage facilities‑based competition (Kaserman and Mayo, 1997).  The measure for this wholesale discount has been “avoided cost.” The ascertainment of such cost inevitably leads to regulatory and conceptual disputes. In the US, the FCC, bootstrapping itself to state determinations, came up with a default discount range of 17-25 percent.
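
A minimal sketch with hypothetical figures shows how an avoided-cost discount translates into a wholesale price and a resale margin; the retail price and avoidable cost below are assumptions for illustration only.

```python
# Minimal sketch (hypothetical figures) of an avoided-cost wholesale discount:
# the resale price equals the retail price minus the retail-level costs the
# incumbent avoids by not serving the customer itself (marketing, billing, etc.).

RETAIL_PRICE = 20.00    # assumed monthly retail price, dollars
AVOIDED_COST = 4.00     # assumed avoidable retail-level cost, dollars

wholesale_price = RETAIL_PRICE - AVOIDED_COST
discount = AVOIDED_COST / RETAIL_PRICE
print(f"wholesale price: {wholesale_price:.2f}")   # 16.00
print(f"discount:        {discount:.0%}")          # 20%, within the 17-25 percent default range

# The reseller's viable margin is the gap between the retail price it can charge
# and this wholesale price -- the "knife-edge" discussed above.
```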

 

5.1.7    Efficient component pricing

An influential option for access pricing is the so-called Efficient Component Pricing (ECP) Rule.  Under this option, aside from the direct incremental costs incurred by the incumbent in providing access, the entrant is required to compensate the incumbent for the loss of net revenue that its entry may cause. The interconnection price for an entrant's call would then be the average incremental cost of access, including all relevant incremental opportunity costs (Baumol and Sidak, 1994).
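In the notation used here (a standard statement of the rule rather than a quotation from the sources), the ECP access charge is

\[
a \;=\; IC_{\text{access}} \;+\; \underbrace{\bigl(P - IC_{\text{access}} - IC_{\text{retail}}\bigr)}_{\text{forgone net revenue}} \;=\; P - IC_{\text{retail}},
\]

where P is the incumbent's retail price for the final service, IC_access its incremental cost of supplying the access input, and IC_retail its incremental cost of the retail activities that the entrant now performs for itself.  An entrant paying a can match the incumbent's retail price and still break even only if its own retail cost is no higher than IC_retail, which is the sense of property (1) below.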

Advocates claim that ECP would improve or at least maintain current welfare through its four main properties.

      (1) Only interconnectors with lower incremental costs would be willing to enter the market.

      (2) New entrants would not affect the incumbent's revenues.

      (3) It does not interfere with cross‑subsidization (desired or otherwise).

      (4) It eliminates an entry barrier since incumbents would have no incentive to keep rivals out.   

According to incumbent LECs, ECP provides an approach that closely parallels the methodology a firm would employ in an environment in which it sells its inputs to a firm that intends to compete in the final product market.

Criticism of this approach has been quite strong.  It has been described, among other things, as guaranteeing monopolists their profits against competitive losses, as assuming zero-sum traffic, as neglecting dynamic efficiency gains, and as neglecting the positive network externalities bestowed by entrants.

In response, Baumol and his co-authors agreed that, by itself, ECP will not result in a competitive pricing structure.  In particular, initial charges must be constrained by market forces or regulation.  Only if that condition is met would a "level playing field" result and welfare gains be realized.  ECP is not designed to do away with monopoly profits; the problem lies in the incumbent being permitted to charge monopoly prices in the first place.  Baumol assumes either a perfectly regulated market or "a perfectly contestable market."

Contestability means that competitors could enter at will if the incumbent priced service above the competitive level.  However, the competitors' ability to enter is precisely the issue in interconnection.  If they could enter, Baumol's entire analysis is pointless, because market transactions would determine interconnection prices.  Interconnection pricing is a policy issue precisely because markets are not contestable.

In the real world, the weight of ECP was bolstered when a distinguished court upheld, in 1994, Telecom New Zealand's opportunity-cost based charges.  In its judgment, the Judicial Committee of the Privy Council, the Commonwealth's highest tribunal, separated two issues: monopoly rents and abusive conduct.  Even though it agreed in part with the New Zealand Court of Appeal on the matter of monopoly rents, it decided that ECP could not be considered an abusive pricing principle.

      "... Their Lordships are of the view that, apart from the risk of monopoly rents, the Baumol Willig Rule does provide a proper model for demonstrating what would be charged by the hypothetical supplier in a perfectly contestable market.

 

Yet for the UK itself, the regulatory agency OFTEL found several disadvantages with ECP.  In particular, where economies of scope exist between long distance and local services, and where monopoly pricing is therefore likely, ECP would require an entrant to have its full costs lower than the incremental cost of the incumbent.  OFTEL therefore concluded that ECP did not support the major objective of its interconnection framework, namely, to help the market move towards a more competitive environment.

          In the United States, too, the FCC explicitly rejected the ECP framework with respect to interconnection and access to unbundled network elements, noting that the ECP methodology “precludes the opportunity to obtain the advantage of a dynamically competitive marketplace” (FCC, 1996). The FCC concluded that ECP does not replicate a competitive environment but rather (1) continues the inefficient and anti-competitive aspects of the existing price structure; (2) distorts competition by providing incentives for incumbent LECs to shift costs of competitive services to bottleneck services; and (3) preserves the status quo and serves as a barrier to entry.

 

5.1.8    Marginal cost pricing

In a fully competitive market, long-term marginal cost pricing is optimal in terms of economic efficiency.  Marginal cost is defined as the cost of producing one more unit of output.  The cost of increasing output by a given quantity is incremental cost, which approximates marginal cost if averaged across the increment.  When applied to interconnection, incremental cost pricing recovers the additional costs the incumbent incurs as a result of interconnection.
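In symbols (notation introduced here for exposition rather than taken from a particular source), with total cost C(q) at output q:

\[
MC(q) \;=\; \frac{dC(q)}{dq}, \qquad AIC \;=\; \frac{C(q_0 + \Delta q) - C(q_0)}{\Delta q},
\]

where q_0 is the incumbent's output without the interconnecting traffic and \Delta q the increment that interconnection adds; the average incremental cost AIC approaches marginal cost as the increment becomes small.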

While the principle of long‑run marginal cost pricing is simple, its conceptual and practical details are not.  What is the long‑term in a dynamic industry?  Over what volume is it distributed?  What are costs in a monopoly environment if a monopolist is inefficient? What is included in cost, given the frequent incentives to shift costs from competitive lines of business to price‑regulated ones?  What are the costs properly attributable to one line of business when some costs are “joint and common”?  Should marginal cost include the opportunity cost of foregone profits to the incumbent due to the interconnection? 

A further question is whether to set long-run incremental cost based on actual ("historic") costs, present costs, or even future costs.  Historic costs, also referred to as embedded costs, are the actual costs incurred to build the network.  Future ("forward-looking") cost methodologies, on the other hand, do not use an embedded rate base, but rather postulate a hypothetical network based on near-term best-practice technology and efficient engineering.  The regulatory search for the proper marginal cost led to long-run incremental cost (LRIC) methodologies.  The US variant is known as TELRIC, the European as FL-LRAIC, and the Japanese as LRIC.  Under TELRIC, the cost basis is not the telecommunications service (for example, interstate access service) but rather the unbundled network elements that are used to provide the service (for example, the local loop or the local switch).

Incumbent LECs generally argue that future costs should be based on each incumbent's existing network technology, not on some idealized least-cost, most efficient network that may bear no relationship to existing operations.  Furthermore, they argue that competitors would not invest in their own facilities if the alternative is an LEC element at a regulated price no higher than that of the least-cost, most efficient provider.  Entrants, on the other hand, prefer forward-looking prices, at least as long as costs are declining over time.  They contend that, in a competitive market, the most efficient provider will set the prices, regardless of the costs actually incurred by the incumbent.  They also argue that pricing based on the most efficient technology prevents cross-subsidies by the incumbent LEC, e.g., charging to regulated services the cost of facilities used to compete in other markets.  In addition, entrants would be able to share in the economies of scale and scope of the incumbent, which limits the strategies available to the incumbent for preventing entry.

Forward-looking cost methodologies attracted criticism.  Setting prices equal to forward-looking cost will in many cases not allow the recovery of total costs, i.e., total revenues will fall short of total costs, especially in the transition from a previous system that tended to under-depreciate.  Furthermore, even if average prices approach cost, this does not necessarily force each individual element's price towards cost.  There would still be varying price-cost margins among different customers and services.  Incumbent LECs also argued that past investments, including inefficient ones, were frequently a direct result of regulatory requirements and that they should therefore be entitled to recovery.

In response, proponents of TELRIC argue that allowing the recovery of embedded costs will lead to higher prices and result in entrants over-building new capacity instead of maximizing the use of existing facilities.  To this, incumbent LECs respond that the exclusion of embedded costs from the rate structure is confiscatory in nature and thus a "taking" of property.

TELRIC has also been criticized by proponents of "real options" theory.  Their basic critique has been that the FCC's pricing rules for unbundled network elements lead to prices that are too low.  They contend that incumbents are required by regulation to give entrants a free option, the option being the right but not the obligation to obtain the unbundled element (Hausman, 1999).

 

5.2       Arbitrage

One can set interconnection prices administratively or by market forces.  The former is intrusive, while the latter requires competition and the absence of market power.  Where competition is partial, its role can be extended through arbitrage and benchmarking, such as "Third-Party Neutrality."  As long as some local network segments are competitive, the access charge system could be simplified.  Networks could charge any price and select their customers, subject only to the following principles:

1.                  Networks cannot prevent arbitrage by discriminating against their customers’ customers.

2.                  The prices for monopolistic network segments would be based on those prevailing for similar segments where competition exists (i.e., on benchmarks).

These are simple rules.  The second would wither away with competition.  The first would join various similar rules in commercial transactions, such as the holder-in-due-course doctrine, whose purpose is to facilitate transactions in the economy.

 

5.3       Incremental cost

It is one thing to speak in the abstract of total cost, historic cost, average cost, and so on.  It is quite another matter to define and measure them.  In the market, cost need not be demonstrated, only acted upon.  However, when, as is more likely, an interconnection charge is either regulated or its feasibility is subject to appeal before a regulatory agency, cost needs to be defined and justified.

Incumbent network operators have strong incentives to assign high portions of joint costs to interconnection services.  The problems for the regulator are therefore to identify which costs are relevant to interconnection, how large they are, and how to allocate them.  Economically efficient cost allocation would suggest that the costs of services should be borne by those who cause them.  But as applied to the pricing of interconnection, this principle can be difficult to operationalize.  This is due to conceptual questions (which costs should properly be included?) as well as factual ones (what are the figures, and are they reasonable?).  The problem is further complicated insofar as companies are asked to disclose proprietary cost information that competitors may find useful.

The tracking of cost information is difficult.  The integrated structure of carriers, together with the complexities of joint and common cost allocation, makes it hard to detect cross-subsidies and cost-shifting.  This is compounded where equipment, facilities and manpower are shared by regulated as well as unregulated operations.  In the United States, the FCC and the state commissions therefore implemented a complex set of structural and nonstructural accounting rules that try to ensure a proper allocation of costs and revenues: the Uniform System of Accounts.  All regulated telecommunications firms have to comply with the USOA's basic accounting rules.  Accurate records must be kept for revenues, operating costs, depreciation expenses, and investment in plant and equipment, and a distinction must be made between regulated and nonregulated businesses.

Costs have been divided into traffic-sensitive (TS) and non-traffic-sensitive (NTS) costs.  The fixed NTS costs comprise, among others, customer premises equipment (CPE), inside wiring, the local line, maintenance costs, and a portion of local exchange switching equipment.  The variable TS costs cover switching and trunking plant and equipment that is shared by all users.  The methodology distinguishes among nine separate variable cost elements, such as line termination functions in the LEC's end office, traffic-sensitive switching equipment, directory assistance, interexchange facilities, and common transport facilities.  These data are then reported through the FCC's Automated Reporting Management Information System (ARMIS).

As if this were not enough, an additional critical issue exists in the United States: the "jurisdictional separation" of inter- and intrastate costs and revenues (NARUC, 1971).  Costs assigned to the interstate jurisdiction are recovered from interstate services such as access charges (i.e., charges for the connection between local and long distance services).  But what is that interstate share, given the integrated nature of a communications network and its use for both long-distance and local calls?  After much dispute, the overall interstate share of the larger LECs' switched network costs was fixed, by a compromise that was political rather than economic or technological, at 25 percent, even though total interstate switched traffic was only 14 percent of all switched traffic.
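To illustrate the effect of this fixed allocator with hypothetical figures (not drawn from any actual tariff), a loop costing $20 per month is split as

\[
0.25 \times \$20 = \$5 \;\text{(interstate jurisdiction)}, \qquad 0.75 \times \$20 = \$15 \;\text{(intrastate jurisdiction)},
\]

regardless of whether interstate calls actually account for 25 percent or only 14 percent of the line's use.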

A second major approach to determining cost is to skip actual cost measurements in favor of generalized ones.  Instead of determining what cost has been, one calculates what it should be, looking forward.  These numbers can derive from armchair analysis, but more likely originate from "proxy" engineering models.  These models, initially designed in the United States to determine the appropriate interstate universal service burdens and allocations, generally use geographically based units to determine the standard costs of serving customers within a set area.  Not only must the area be representative in terms of terrain and customer dispersion, but it must also approximate the size for which engineers would be comfortable making provisions for equipment.  For example, one of the proxy models in the United States, the Benchmark Cost Model (BCM), which was supported by several ILECs, looked for the average costs required to serve residential customers within census block groups generally containing 250 to 550 households, which were assumed to be evenly distributed.

Different approaches embody different financial assumptions.  For example, the HAI Model, supported by several interconnectors, depreciates equipment over 18 years, with a cost of equity of 11.25 percent, whereas the Cost Proxy Model (CPM), supported by other ILECs, uses a 12-year depreciable life.
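To make the stakes concrete, the following minimal sketch in Python (illustrative only; it is not a reproduction of the HAI or CPM models, and all figures are hypothetical) levelizes a per-line loop investment into an annual capital charge under different assumed depreciation lives, holding the cost of capital fixed for comparability:

def annual_capital_charge(investment: float, life_years: int, cost_of_capital: float) -> float:
    """Levelized annual charge that recovers `investment` over `life_years`
    at the given annual cost of capital (standard annuity formula)."""
    r = cost_of_capital
    annuity_factor = r / (1.0 - (1.0 + r) ** -life_years)
    return investment * annuity_factor

if __name__ == "__main__":
    loop_investment = 1200.0  # hypothetical per-line loop investment, in dollars
    # Compare a longer depreciation life with a shorter one, cost of capital held constant.
    for label, life, rate in [("18-year life", 18, 0.1125), ("12-year life", 12, 0.1125)]:
        charge = annual_capital_charge(loop_investment, life, rate)
        print(f"{label}: ${charge:.2f} per year (${charge / 12:.2f} per month)")

Under these stipulated figures, shortening the assumed life from 18 to 12 years raises the annual charge by roughly 18 percent, the kind of difference that drives the disputes among model sponsors.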

The FCC developed a hybrid proxy based on the outputs of several models proposed to it, along with a local loop design and clustering algorithm developed by its own staff.  This hybrid proxy cost model was adopted in 1998 (FCC, 1998; Sharkey et al., 1992; Sharkey, 2001).

The proxy cost models have also been used, with modifications, to determine the cost of the unbundled network elements (UNEs), a task delegated by the FCC to the state utility commissions under the condition of utilizing a forward-looking cost methodology, an economics-based rather than regulatory depreciation, and a risk-appropriate cost of capital.  The state-determined UNE costs varied over a wide range (10:1), suggesting that any methodology can be fitted to the policy goals.  A major source of differentiation was the treatment of "non-recurring costs," which incumbents had incentives to inflate.  Engineering-based cost models, despite their scientific claims, are only as credible as their underlying assumptions.  Supporters of the different proxy models advocate the model that favors their own interests.  In the process of balancing pressures and adding "realism," the models have lost transparency to users and policy makers.  They acquired the unmistakable flavor of an administrative tool of a planned economy, even if they were adopted in pursuit of market competition.

How does all this add up?  Economic analysis, such as Mark Armstrong's companion piece in this volume, can throw light on the consequences of several of these approaches.  But in another sense, the vigorous discussion over pricing principles also shows that while ideas matter to the world, the world matters even more to the visibility of ideas.  In telecommunications, these ideas materially affect the interconnection charges paid by some companies to others.  For some American long-distance companies, these payments used to account for about 40 percent of their overall expenditures; for the local exchange companies, the receipts were over 20 percent of their revenues.  The pricing and costing principles also relate directly to the payments which various companies make towards the financing of universal service.  The magnitude of that redistributive system has been estimated, depending on definition, methodology, and interest, at anywhere between about $4 billion and $20 billion.  In many cases, the setting of the interconnection charge was a matter of corporate life and death.  Given those stakes, it is not surprising that supportive ideas are in demand by each side and that they receive wide play from their proponents.

Different economic models lead to different conclusions.  The efficient component pricing (ECP) rule has been advocated by several distinguished economists.  It is advantageous to incumbent local exchange companies, since it permits them to charge high interconnection prices.  Other pricing models result in low interconnection prices, and are therefore favored by new entrants.  Forward-looking long-run incremental cost (LRIC) is such an approach, and it, too, is supported by equally distinguished scholars.  It is supplemented by planned-economy-style, engineering-based proxy cost models that are advanced by the staunchest advocates of free markets.  Various experts line up before the regulatory decision makers, brandishing competing theories with well-compensated passion.  Who is right?  Perhaps a better question is what the policy goal is.

Thus, when the policy goal is to expand basic telephone service or to keep basic telephone prices low, regulators will be supportive of the incumbents as long as these recycle their gains into wide and affordable connectivity.  In that situation, the cost models picked will tend to be along the lines of efficient component pricing or of fully distributed cost pricing.  Where large customers are to be favored, Ramsey pricing provides an efficiency rationale.  With vigorous local competition as the goal, regulators have adopted marginal cost models, whose fundamental advantage to entrants is that prices are lower because they reduce or postpone the entrants' contribution to fixed costs.  And when regulators have tried to further accelerate the pace of entry into local competition, they have extended this approach to forward-looking cost.  There are some good theoretical arguments for such a methodology, but it is doubtful that this approach would have been chosen if prices had not conveniently trended down, but had instead risen and thereby slowed entry.

It would be easy to conclude that the carriers of ideas are merely the champions of the various carriers of transmission.  Yet, if we measure new concepts only by the yardstick of cui bono, debates over ideas would be pointless.  Out of thesis and antithesis, however motivated, a higher form of understanding emerges.

 

6                                 Interconnection around the world

 

A wide range of approaches to interconnection exists within and among countries.  Yet most issues are similar: the impact of emerging network competition on the pricing of incumbent operators; the determination of the true cost of the underlying infrastructure; the assurance of transparency in incumbent operations; the striking of a balance between general government regulation and competitors' freedom to negotiate their own deals; the sharing of the burden of universal service obligations; the symmetry of payment obligations; and the promotion of entrants in order to protect competition, yet without providing them with guarantees of survival.  While the basic issues are common, different countries move at varying speeds and with different emphasis on competing policy goals.  This is based on history, politics, economics, the state of the evolution of the public network, and the development of an information-based economy.  Several countries have taken the lead in pushing the envelope of activist interconnection policy.  They tend to be those countries with the longest experience in telecommunications liberalization.

In the process of moving ahead at different rates of change, divergent policies are creating international tensions.  The interconnection of national networks with each other, an unusually harmonious sector for international cooperation since the middle of the 19th Century, has become a source of struggle.  For a long time, international institutions provided the mechanism for harmonizing restrictive cartel behavior in interconnection.  But no longer.

The International Telecommunication Union was created, to a significant extent, to manage international interconnection arrangements.  For many decades it was supportive of a system of an international cartel of national monopolies.  It firmly opposed, for example, the interconnection of leased lines with public networks, as well as the resale and sharing of leased line capacity. In the 1990s, however, the ITU process became more accepting of privatization, liberalization and competition. 

Similarly, Intelsat, the major international satellite consortium, had embodied the collaborative international system, with all of its strengths and weaknesses.  For a long time Intelsat had legal exclusivity over international civilian satellite communications.  Intelsat operated a global gatekeeper system.  Interconnection into Intelsat infrastructure was granted only to a single entity per country, usually the monopoly incumbent operator.  No other carrier could interconnect into Intelsat, and only official carriers could interconnect into the designated carriers.  The notion of a large user uplinking directly to Intelsat was sheer heresy.  The purpose of this arrangement was to tightly control international satellite traffic, especially in its economic dimensions.  International calls were enormously profitable, but prices could not be maintained under competition.  This indeed happened in the 1990s, when Intelsat's hold over international traffic collapsed.  In consequence, Intelsat shifted its role and became a market participant rather than a cartel enforcer.

In Europe, the EU Commission pursued two goals, sometimes in contention with each other: the harmonization of European rules of telecommunications, and the liberalization of telecom markets.  Liberalization provided the banner of substance, but harmonization, under the control of Brussels, was also the bureaucratic agenda.  The principles for the harmonization of domestic interconnection were laid out in the ONP (Open Network Provision) Framework Directive of 1990, and fleshed out in the subsequent ONP voice telephony directive.  Interconnection and its pricing were the most controversial issues in that directive.  The traditional monopoly PTOs demanded a contribution for their infrastructure investments through the recovery of historic cost, while the interconnectors advocated a nondiscriminatory and transparent "equivalent interconnection."  The EU Commission fudged this, giving leeway to national governments on the actual details of interconnection, access charges, and dispute arbitration.  The Commission's role was therefore to regulate regulators rather than market participants (Austin, 1994).

Unfortunately, the ONP voice telephony directive had the dubious distinction of being the very first piece of legislation rejected by the European Parliament under its newly acquired powers of co-decision created by the Maastricht Treaty.  The Parliament found a lack of consumer protection provisions and weak transparency requirements.  Soon thereafter, it adopted an updated proposal (1995).

Telecom companies which account for over 25 percent of a national market are presumed to have significant market power (SMP) and must fulfill certain obligations.  These include meeting all reasonable requests for access to their networks, nondiscrimination, unbundling, publication of a reference interconnection offer (including price lists), cost-oriented tariffs, and a transparent accounting system, including accounting separation in some cases.  Such accounting separation requires operators to implement appropriate cost allocation methodologies, with a preference for long-run cost methodologies.  National regulators can intervene in disputes and inspect any agreement.

The Commission also adopted a recommendation on interconnection pricing which supports the eventual use of a forward-looking long-run average incremental cost model (FL-LRAIC), with “best current practice” charges in the interim.

Once Europe, the United States, Japan, and several other major countries had moved to a system of an interconnected network of networks, others followed.  In 1997 the member countries of the World Trade Organization (WTO) concluded a multilateral deal aimed at liberalizing international trade in basic telecommunications services, with direct impact on interconnection.  The deal (technically not an "agreement") comprised 55 schedules, covering 69 governments, of legally binding commitments to open basic telecommunications markets to foreign competition.  The national schedules are an integral part of a larger treaty, the General Agreement on Trade in Services (GATS).

Telecommunications services were one of the first sectors negotiated under the new WTO regime.  A major policy breakthrough in the process was an agreement to include an American-authored Reference Paper on regulatory reform.  The paper had direct relevance to interconnection issues.  It laid out several key principles.  Incumbent network operators must provide market entrants with interconnection at any technically feasible point in the network.  Such interconnection must be provided under nondiscriminatory terms, conditions and rates, and should be of a quality no less favorable than that the provider affords its own services.  Moreover, interconnection rates must be cost-oriented, transparent, and, where economically feasible, unbundled.  A dispute mechanism administered by an independent body was also called for.

The agreement was significant because it institutionalized for the first time a system of multilateral mutual surveillance and a framework for bilateral bargaining over market entry.  But reality may disappoint.  There is indeed an agreed-upon time schedule, but it applies only to the WTO segment of any dispute.  The WTO is an organization of governments, not a civil court.  A firm must convince its national government to back its claim and present it to the other governments and the WTO arbitration panel.

In the past the model of international institutional cooperation aimed to protect national monopoly systems, and was exemplified by the “old” ITU and Intelsat.  Their interconnection policies were squarely aimed to preclude competitive entry.  But by the end of the century, the international institutions of telecommunications had moved to a liberalizing role, with more open interconnection arrangements at its center.  In these bodies, interconnection policy had proceeded to its second stage, that of opening national and international markets.

 

 

7                                 Interconnection and common carriage

 

What impact does interconnection of networks have on content?   Would information flow with greater ease or with more restrictions?  Two types of legal status apply to various carriers.  The private carriage approach is one of contract and property.  The private owner of an electronic conduit can utilize its capacity as it sees fit, transacting with those partners it wishes to engage with and carrying only the information content it wishes to accept.  In contrast, common carriage has been the underlying system for the flow of information over the telephone network.  It reduced the ability of network providers to discriminate among similar customers, and to select their customers.  It required a carrier to accept any lawful content. 

Precursors to common carriage in telecommunications go back to the Roman Empire and the legal obligations of ship owners, innkeepers and stable keepers.  In England, the early common law placed certain duties on mostly infrastructure businesses which were considered "public callings."  "Common" in that context meant "open to serving the general public" or "general".

For centuries, common carriage principles have played an important role in the infrastructure services of transportation and communications.  This system was created to assure that all customers seeking an essential infrastructure service, and willing and able to pay the established price, would not be denied lawful use of the service nor otherwise be discriminated against.  For one hundred years these principles have facilitated telecommunications users' access to the public networks.  Carriers were severely limited in their ability to select their customers and to restrict those customers' usage of the network, including content types, as long as these were lawful.  In return, a common carrier obtained certain benefits, including limited liability for the consequences of its own actions.  It also often received, by statute, powers of eminent domain, use of public rights-of-way, and even protection against competition.

Today, as networks interconnect physically, functionally, and financially, they must also converge in terms of the rules under which content flows over them.  While the old system of segregated networks made it possible to segregate content-flow rules, this is becoming difficult as networks interconnect.  Starting in the 1980s, telephone common carriers, reacting primarily to outside pressures and concerns about their corporate image, attempted to ban or restrict sex lines based on content, even where the messages were legal.

One important aspect of common carriage is that it facilitates interconnection and hence competition.  This is because under common carriage, access is provided to all customers, even where they are the competitors of the carrier.  This reduces the entry barriers for competitors, since they can utilize elements of the common carrier.  For example, under this principle, MCI was able to reach its customers over AT&T's local networks rather than having to first build its own local distribution facilities.  Hence, common carriage reduces monopoly power, though its availability may also reduce the incentives for competitive entry or for upgrades.

With the advent of interconnectivity, the demise of common carriage becomes a distinct possibility.  There are several reasons.  A common carrier cannot use differentiated pricing in the same way that a private contract carrier can, because it cannot prevent arbitrage.  A private contract carrier can pick its customers.  If interconnection links these two legal regimes, what will be the impact?  In head-to-head competition, the restrictions on common carriers disadvantage them against private carriers, all other things being equal.  Common carriers qua carriers will not become extinct, but they will increasingly conduct their business as private carriers, and common carriage as such will disappear over time.  This will not happen overnight, of course.  But the basic dynamics will sooner or later assert themselves.

A common carrier must serve a private contract carrier, but not vice versa.  These conceptual discussions have a very practical dimension, which revealed itself as cable TV companies moved beyond their traditional activities.  In 1999, when AT&T bought the cable TV giant TCI, it also acquired TCI's @Home, a cable-based ISP and on-line content provider using high-speed cable modem access to @Home's proprietary content as well as to the Internet.  The ISP industry, led by AOL, protested the arrangement.  The problem was that TCI bundled its cable service with @Home, and cable subscribers who wanted to access the services of another ISP or content provider would have to pay an extra fee, making those providers uncompetitive with @Home.  The rival ISPs therefore asked the FCC to require an equal access arrangement from AT&T/TCI for any ISP that desired such access.  In effect (though not in these words), they asked the FCC to impose a quasi-common carrier status on cable carriers, or at least on their packet transmission.  Such a policy would inevitably also apply to cable-based telephony.  The ISPs' position was upheld by several lower courts.  An appellate court overturned those decisions in 2000; AT&T thus won on jurisdictional grounds, but it could hardly have been happy about its victory, because the court classified Internet access as a "telecommunications service" subject to federal regulation, which meant that it was potentially subject to all the regulations aimed at telephone companies, such as unbundling and access requirements.  Somewhat reluctantly, the FCC initiated proceedings.

The demise of common carriage raises the question of the impact on the free flow of information when any carrier in a chain of transmission could impose its own standard of acceptable content.  One alternative to the regulatory requirement of common carriage is arbitrage, with "third-party neutrality" (TPN) as a substitute for common carriage.  It imposes no common carriage obligation on any carrier.  However, if a carrier interconnects with another carrier or user, it cannot restrict them from contracting with third parties.  The carrier cannot discriminate against its customers' customers.  This, in effect, creates arbitrage in content access, and makes it difficult to discriminate in price or through selective interconnection.  Third-party neutrality thus ensures a non-discriminatory flow of information in an environment where carriers can contract freely, as long as there is at least some access node that is not controlled by a carrier.

 

8                                 The future of regulation of interconnection

 

Our discussion has taken us from history -- where we identified control over interconnection as the key lever in the traditional establishment of monopoly; to policy, with interconnection as the battleground for creating and blocking competition; to international relations, where interconnection has become a global issue; to economics, where the pricing and costing of interconnection are matters of survival. 

The remaining question is whether in a fully competitive environment, any residual interconnection rules are needed, such as requirements to interconnect or to avoid discrimination.

The issue is not negative but positive.  Regulation exists not because of bureaucrats who cannot let go.  It is created largely as a political response to interest groups.  These interest groups will not disappear, and new ones will emerge.  In a democratic political system, a majority coalition always wants something from the minority.  In telecommunications, this has been the underpinning of such policies as universal service, rate averaging, high-cost area redistribution, and so forth.  With telecommunications becoming ever more important, not having full connectivity to the new and powerful means of communication becomes a major disadvantage.  That is why, inevitably, the definition of universal service will expand, even as competition makes basic services more efficient.  The introduction in the US of a favorable "e-rate" for Internet access by schools, libraries, and hospitals is an early example.  Other issues are consumer protection, privacy, and rural service.

Throughout this article, we have described the changing use of interconnection.  In its first stage, it was pro-incumbent and was used to establish monopoly control.  Much later, the goal was pro-competitive, and interconnection was used to pry open the network environment to new entrants.  When local competition was slow in letting entrants catch up with the entrenched incumbent, interconnection policy moved to its third stage, in which it aimed at market control.

In most countries, even after a number of years of competitive entry, the incumbent still is dominant in most traditional market segments.  Supportive interconnection policies can accelerate competition.  Yet they become a problem if they are open-ended and without an explicit phase-out.  In that case, it will be difficult to terminate the rules in the future, because entire clusters of firms will depend on them for survival.  Even if some of the entrants become vigorous enough to compete without regulatory help and to negotiate non-coercive interconnection agreements, there will always be competitors on the margin who would not survive the abolition of supportive interconnection rules.  Their impending demise would then lead to desperate fights to maintain the rules, and to public arguments that their abolition would reduce competition by reducing the number of competitors.

            Hence, it becomes more difficult to exit interconnection regulation than to enter it, unless the exit scenario is spelled out clearly in advance.  This provides incentives to the entrants to plan for a competitive scenario where they are on their own.  Within reason, such clarity is much more important than the length of the transition.

Entrants do not like specific exit scenarios because they fear that these permit incumbents to engage in dilatory practices.  But that is only true if incumbents can delay with impunity and without cost.  Hence, the exit timing needs to be tied to an equally specific schedule for the incumbent to fulfill a set of requirements in the provision of the elements of actual interconnection, such as support systems, number portability, etc.

If interconnection rules are only temporary, there is less need for them to be overly complex.  In the United States, the battles over the precise nature of interconnection methodology and its pricing models have delayed actual local entry, and will continue to do so as the details of the rules require further clarification, proceedings, regulation, litigation, and legislation.  A temporary solution, on the other hand, can be structured as a relatively simple system.

The alternative -- interconnection regulation into the distant future without a clear exit scenario -- maintains the regulator in a position to establish entitlements, to create client relations with the industry, and to maintain its role of perpetually indispensable arbiter.  This is not a scenario of deregulation and market competition, but of a micro-managed telecommunications market and industry dependency into the distant future.

Furthermore, it is likely that the significance of government’s wielding the tool of interconnection policy will increase rather than decline.  Even if telecommunications markets become more competitive, there will be no shortage of problems beyond bottleneck power, the present rationale.  Consumer protection is one example.  The affordability of certain socially valuable services is another.  To deal with those issues, government requires a policy instrument, and the control over interconnection provides it.  To add to its usefulness, it is largely “off-budget”, requiring no outlays, only rules and mandates.  With those advantages, it is difficult to imagine that interconnection will be withdrawn from the regulatory arsenal.  If anything, its role will grow.

 


Bibliography

            Austin, Marc T. (1994),  ‘Europe's ONP Bargain: What's in it for the Users,’ Telecommunications Policy, 18 (2), 97-113.

 

            Armstrong, Mark, (1998), ‘Network Interconnection in Telecommunications,’ Economic Journal, 108, 545-564.

           

Armstrong, Mark, Chris Doyle, and John Vickers (1996), 'The Access Pricing Problem: A Synthesis,' Journal of Industrial Economics, 41 (4), 335-360.

 

            Armstrong, Mark (2001), ‘The Theory of Access Pricing and Interconnection’, in this volume.

 

            Baake, Pio and Thorsten, Wichmann (1999), ‘On the Economics of Internet Peering’, Netnomics, 1, 89-105.

           

                Baumol, William J. and David F. Bradford (1970), ‘Optimal Departures from Marginal Cost Pricing’, American Economic Review, 60, 265‑283.

 

            Baumol, William J. and J. Gregory Sidak (1994), ‘The Pricing of Inputs Sold to Competitors,’ Yale Journal on Regulation, 11, 171-202.

 

            Beard, T. Randolph, David Kaserman, and John Mayo (1998), ‘The Role of Resale Entry in Promoting Local Exchange Competition’, Telecommunications Policy, 22 (4/5), 315-326.

 

                Braeutigam, Ronald (1980), ‘An Analysis of Fully Distributed Cost Pricing in Regulated Industries’, Bell Journal of Economics, 11(1), 182‑196.

 

            Brock, Gerald W. (1993), ‘The U.S. Telecommunication Policy Process, A Study in Decentralized Public Decision Making’, mimeo.

 

            Carter v. AT&T, 250 F. Supp. 188 (Northern District Court of Texas 1966).

 

Cave, Martin (1994), 'Regulating Service Provision in Mobile Telecommunications: Some Lessons From British Experience', mimeo.

           

            Economides, Nicholas and Lawrence J. White (1995), ‘Access and Interconnection Pricing: How Efficient is the Efficient Component Pricing Rule?’, The Antitrust Bulletin, 16, 271-284.

 

            FCC (1997), Access Charge Reform, Docket 97-158.

 

FCC (1997), Access Charge Reform Docket No. 96-262, No. 96-45.

 

            FCC (1982), Exchange Network Facilities, Docket No. 78-371, 90 F.C.C.2d.

 

FCC (1967), In re Microwave Communications, Inc. Docket No. 16509, 64 F.C.C. 2d 979.

 

Gabel, David and David F. Weiman (1994), ‘Historical Perspectives on Interconnection between Competing Local Exchange Companies’, Columbia Institute for Tele-Information, Working Paper.

           

            Hausman, Jerry (1999), ‘The Effect of Sunk Costs in Telecommunications Regulation’ in Alleman, James and Eli Noam (eds.), The New Investment Theory of Real Options and its Implications for Telecommunications Economics, Boston, MA: Kluwer Academic Publishers, pp. 191-204.

 

            Huber, Peter W. (1994), ‘Competition and Open Access in the Telecommunications Market of California’, (Unpublished report).

           

            Hush-A-Phone Corp. v. United States, 238 F.2d 266 (District of Columbia Circuit Court 1956).

 

International Telecommunication Union (1995), The Changing Role of Government in an Era of Telecom Deregulation. Interconnection: Regulatory Issues, Geneva, Switzerland: International Telecommunication Union.

 

Kahn, Alfred E. (1988), The Economics of Regulation: Principles and Institutions, Cambridge, MA: MIT Press.

 

Kahn, Alfred E. and William E. Taylor (1994), ‘The Pricing of Inputs Sold to Competitors: A Comment’, Yale Journal on Regulation, 11, 225-227.

 

Kaserman, David, and John Mayo (1997), ‘An Efficient Avoided Cost Pricing Rule for Resale of Local Exchange Telephone Services’, Journal of Regulatory Economics, 11, 91-107.

 

Laffont, Jean‑Jacques and Jean Tirole (1994), ‘Access Pricing and Competition’, European Economic Review, 38, 1673‑1710. 

 

            Laffont, Jean-Jacques and Jean Tirole (1996), ‘Creating Competition Through Interconnection: Theory and Practice,’ Journal of Regulatory Economics. 10, 129-145.

 

Laffont, Jean-Jacques and Jean Tirole (2000), Competition in Telecommunications, Cambridge, MA: MIT Press.

 

Lehr, William H. and M. B. H. Weiss (1996), ‘The Political Economy of Congestion Charges and Settlements in Packet Networks’, Telecommunications Policy, 20, 219‑231.

 

            MCI Telecommunications Corp. v. FCC, 561 F.2d 365 (District of Columbia Circuit Court 1977), cert. denied, 434. U.S. 1040.

 

Mueller, Milton (1988), ‘Open Interconnection and the Economics of Networks: An Analysis and Critique of Current Policy’, Columbia Institute for Tele-Information Working Paper Series.

               

            National Association of Regulatory Utility Commissioners (1971), Separations Manual‑Standard Procedures for Separating Telephone Property Costs, Revenues, Expenses, Taxes, and Reserves, Washington, D.C.

 

Noam, Eli (1994), ‘Beyond Liberalization: From the Network of Networks to the System of Systems,’ Telecommunications Policy, 18 (4), 286-294.

 

 

            Noam, Eli (1994),  ‘Beyond Liberalization II: The Impending Doom of Common Carriage,’ Telecommunications Policy, 18(6), 435-452.

 

Noam, Eli (1994), ‘Beyond Liberalization III: Reforming Universal Service,’

Telecommunications Policy, 18 (9), 687-704.

 

            Noam, Eli (2001), Interconnecting the Network of Networks, Cambridge, MA: MIT Press, forthcoming.

 

            Sharkey, William W., ‘Regulation of Technology and Production,’ in this volume.

 

Sidak, J. Gregory and F. D. Spulber (1997), 'Givings, Takings, and the Fallacy of Forward-Looking Costs,' New York University Law Review, 5, 1068-1164.

 

Telecommunications Act of 1996, 47 U.S.C. §101 et seq., Pub. L. No. 104-104, 110 Stat. 56.

 

Tye, W. and C. Lapuerta (1996), 'The Economics of Pricing Network Interconnection: Theory and Application to the Market for Telecommunications in New Zealand,' Yale Journal on Regulation, 13, 419-500.

 

Vogelsang, Ingo and Bridger M. Mitchell (1997), Telecommunications Competition: The Last Ten Miles, Washington, DC: AEI Press, p. 249.

 

Weare, Christopher (1996), ‘Interconnections: A Contractual Analysis of the Regulation of Bottleneck Telephone Monopolies,’ Industrial and Corporate Change, 5(4), 963-992.

 

Willig, Robert D. (1979), ‘The Theory of Network Access Pricing,’ in Trebing, Harry M. (ed.) Issues in Public Utility Regulation, proceedings of the Institute of Public Utilities Tenth Annual Conference, East Lansing, MI: Michigan State University, pp. 109-152.

Wright, Julian (1999), ‘International Telecommunications, Settlement Rates and the FCC,’ Journal of Regulatory Economics, 9, 71-92.



* The author gratefully acknowledges the help of Robert Atkinson.