The Global Business Conference, Istanbul

Istanbul,  Turkey

August 7 - 13, 2002

All submissions are subject to a double blind review process

ISSN 1553 - 5827     *       The Library of Congress, Washington, DC




The primary goal of the journal is to provide opportunities for business-related academicians and professionals from various fields, in a global realm, to publish their work in one source. The Business Review, Cambridge brings together academicians and professionals from all business-related fields and allied disciplines to interact with members inside and outside their own particular disciplines. The journal provides opportunities for researchers to publish their papers as well as to view the work of others. All submissions are subject to a two-person, double-blind peer review process.

The Business Review, Cambridge is published twice a year, in summer and in December. Requests for subscriptions, back issues, and changes of address can be made via the journal's e-mail address. Manuscripts and other materials of an editorial nature should likewise be directed to that address. Address advertising inquiries to the Advertising Manager.

Copyright 2000-2017. All Rights Reserved

Dancing with an Elephant: Cultural Missteps in Managing a Thai Expatriate

Dr. Charles A. Rarick, Barry University, Miami, FL



Marianne Whitaker is very concerned about the success of one of her new account representatives, Pongpol Chatusipitak, a Thai national who has worked for her at Premuris Investments for only six months. Pongpol does not appear to Marianne to be very motivated, and some of his behavior seems odd to her. A decision must be reached concerning his future with the company. The primary subject matter of this case concerns the cross-cultural difficulties found in managing a foreign expatriate in the United States. Issues particularly relevant to cross-cultural difficulties between Thailand and the United States are emphasized. It was a typically beautiful day in Southern California as Marianne Whitaker peered out her office window at Premuris Investments to the streets below. Marianne was not able to enjoy the scenery, as she was very concerned about the performance of one of her financial services advisors, Pongpol Chatusipitak. Pongpol had been hired six months ago to help generate increased business from the large Thai business community of Southern California. Pongpol had not generated much business in the first few months, but recently his performance had improved. Marianne was also concerned about some of his personal and work behaviors. Marianne felt that she would like to fire Pongpol; however, this choice might not be an option. She wondered out loud where she had gone wrong and whether anything could be done to improve the situation. Pongpol Chatusipitak, or "Moo" as he liked to be called, was from Chiang Mai in northern Thailand. Pongpol graduated from Chulalongkorn University in Thailand with a degree in economics. After working for a Thai bank for three years, he enrolled in the graduate program at the University of Southern California to study finance. Upon completion of an M.B.A. degree from USC, Pongpol was hired by Premuris as a financial services advisor. Marianne Whitaker remembers how she was struck by the warm and easygoing nature of Pongpol.
He seemed to have a perpetual smile and appeared very conscientious. Pongpol did not have an outstanding academic record at USC; however, Marianne had discounted the importance of grades and was more concerned with what she considered to be a strong work ethic in Asian people. The fact that Pongpol had an M.B.A. from a respected school, and spoke fluent Thai, made him a good candidate for the position. Marianne felt that Pongpol would be able to make contacts with the Thai business community and help generate significant revenue from this group of successful entrepreneurs. Premuris had determined that, due to the competitive nature of the financial services industry and the firm's relatively small size, it was necessary for the company to branch out into more select niche markets. It was decided to begin with the Thai business community of Southern California. Thailand, a country whose name means "free land," was never colonized by a foreign power. It is a very homogeneous country, with approximately 84% of its citizens being Thai and 14% Chinese. Most Thais are Theravada Buddhists. Thailand is a constitutional monarchy and its people are considered nationalistic. The Thai flag consists of five horizontal stripes in the colors red, white, and blue. Red represents the nation, white represents Buddhism, and blue represents the monarchy. All three elements are important to most Thais. Thailand is sometimes referred to as the sixth "Asian Tiger," the other five being Japan, Korea, Hong Kong, Singapore, and Taiwan. Thailand experienced rapid economic growth during the 1980s and strengthened its industrial base. Overbuilding and real estate speculation led to a rapid devaluation of Thailand's currency, the baht, in 1997. The devaluation and its subsequent financial turmoil put the Thai economy into a tailspin and caused economic difficulties in neighboring countries.
It was in 1997 that Pongpol decided to leave Thailand and head for the United States to pursue a graduate degree. He had chosen USC because several friends had decided to attend the school, and because of its location. While attending graduate school, Pongpol worked as a waiter in a local Thai restaurant. The restaurant provided a place for him to live and some badly needed income. While working in the restaurant, Pongpol met Stacy, a young American woman, and the two became close friends. Because of his relationship with Stacy and the better employment opportunities in the United States, Pongpol decided not to return to Thailand when he completed his graduate degree. Pongpol liked the idea of working in a financial services firm. While he had enjoyed his work at the bank back in Thailand, a financial services firm offered more prestige and the opportunity to earn more money. He interviewed with a number of firms; however, the only offer he received was the one from Premuris. Marianne had presented a very bright picture of the opportunities for him in the firm, including the opportunity to advance into a managerial position within two years. It was with great excitement that Pongpol accepted the position with Premuris. At first Marianne felt that the choice of Pongpol had been the correct one. Pongpol was very friendly with everyone in the office, he seemed eager to learn the financial services business, and he was very respectful to Marianne. Marianne tried to learn something about Thai culture, including the typical Thai greeting of a wai. In the morning Marianne sometimes greeted Pongpol by pressing her fingers together as if to pray and then bowing her head. She was surprised by Pongpol's reaction. Expecting to receive a wai from him in return, she instead simply received a smile and some brief laughter. It appeared to her that Pongpol was uncomfortable with the traditional Thai greeting.
Marianne, unlike others in the office, refused to refer to Pongpol by his nickname, Moo, when she discovered it meant "pig" in Thai. As time passed, Marianne began to question her decision to hire Pongpol. The first indication that her decision may have been wrong involved excessive requests from Pongpol for time off from work for various social activities. Although Pongpol was still viewed as motivated to learn his job, Marianne was disheartened by his frequent requests to miss work. On one occasion Pongpol had requested three days off since his family was visiting from Thailand and he wanted to show them the various tourist sites of Southern California. His family had never been to the United States, and so she understood Pongpol's desire to make their visit enjoyable; however, she was completely taken aback when Pongpol asked for an additional two days to take them to Las Vegas. Marianne agreed that Pongpol could take three days off but refused to allow the additional two days for a trip to Las Vegas. Pongpol surprised and angered her when he did not report for work on those two days.


The Correlative Relationship between Value, Price & Cost

Dr. Richard Murphy, President, Central Marketing Systems Inc., Ft. Lauderdale, FL



A consumer goes to an electronics store to purchase a new television set. The consumer spends almost an hour listening to the salesperson, looking at and comparing different models. The consumer selects a model priced at $585. Did the television cost the consumer $585? Many people would answer "yes" because that was its price. But there is a difference between the actual dollars charged as the price and the cost to the consumer. That customer invested time and energy in the purchase in addition to the dollars paid. The cost to the consumer, then, must include all the resources that were used to make the purchase. Today's consumer is bombarded with advertisements in all media, direct mail offers, and telemarketing offers for long distance telephone service. One of AT&T's ads boasts a rate of 7 cents per minute for long distance. The price is 7 cents, but that is not the cost. Whether the ad is a commercial on television or an ad in a print document, there is a small caveat printed: the consumer will be billed a monthly charge of $5.95 if they sign up for this long distance rate (Teinowitz, 1999). The actual cost to the consumer is a good deal more: $5.95 per month plus the 7 cents per minute. This is the difference between the price and the cost. This paper takes the concept one step further: What was the value? Did the value equal the cost? There are numerous factors involved when we begin to discuss the issues of value, cost, and price. The value of anything is perceived by the customer, not the manufacturer or the vendor. Value is an abstract construct that the consumer determines based on a number of factors. The degree of risk in the purchase is also a factor in perceived value. Consumers must perceive that they receive a higher value from one vendor or from one product than another in order to purchase it.
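The gap between price and cost in the long-distance example above can be made concrete with a little arithmetic. The sketch below uses the $5.95 monthly fee and 7-cents-per-minute rate quoted in the text; the usage figures (30 and 500 minutes per month) are hypothetical:

```python
def total_cost(minutes_per_month, rate_per_minute=0.07, monthly_fee=5.95):
    """Monthly cost to the consumer: advertised per-minute price plus the hidden fee."""
    return monthly_fee + minutes_per_month * rate_per_minute

def effective_rate(minutes_per_month):
    """Cost per minute once the monthly fee is spread over actual usage."""
    return total_cost(minutes_per_month) / minutes_per_month

# A light user (30 minutes/month) pays far more per minute than the
# advertised 7 cents; a heavy user (500 minutes/month) comes closer to it.
print(round(effective_rate(30), 3))   # 0.268
print(round(effective_rate(500), 3))  # 0.082
```

The same logic applies to the television example: the $585 price understates the cost once the hour spent in the store is counted as a resource.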
The cost includes the actual price of the product or service, but it also includes 'hidden' costs, such as the time it takes to travel to the store or the time it takes to complete the transaction. The following pages more fully discuss the issues of cost, price, and value. The marketing mix includes those variables that the marketing department can control in advertising a product or service. It is intended to convey to consumers the value to them if they purchase this product or service. When this concept was first designed, it was called the 4Ps – product, place/distribution, pricing, and promotion (Dennis, 1999). They represent the marketers' bag of tools, an armory that can be manipulated to gain an advantage over competitors (Carson, 1998; Dennis, 1999). As time has passed, many suggest that the 4Ps should be changed to the 4Cs (Dennis, 1999). The reason is that the 4Ps were devised in the industrial age, when the focus was on the product, but in today's world the focus is on the consumer (Carson, 1998; Dennis, 1999). In other words, marketers need to be customer-oriented rather than product- or company-oriented. The 4Cs give us a beginning understanding of how a company conveys value to the consumer. The 4Cs are: Customer value: What is the value of the product, and what benefits would the buyer gain? Cost to the customer: As we began to explain above, the actual cost equals the price plus the time and other costs the customer incurs to buy the product, e.g., traveling to the store, looking at the product, or standing in line (Dennis, 1999). Price is nothing more than an optimal economic number, while cost is a social-scientific construct that has to do with the customer's perception of how much it really cost them to buy the item (Carson, 1998). Convenience for the buyer: This has to do with channels of distribution – how convenient is it for the consumer to purchase this product (Dennis, 1999).
Communication: Marketing can no longer be confined to a one-way communication mode whereby the company tells the consumer about the product; there must be two-way communication (Dennis, 1999). The only way to know how customers perceive their costs or the value of the product is by asking them, which involves a dialogue of some kind (Carson, 1998), even if it is a survey. This has to do with relationship marketing (Carson, 1998). The marketing mix provides a starting place for the company when marketing any product, be it new or a continuing item. These are the factors that must be considered if the company is going to have a successful marketing campaign. They are not the only factors to consider, but taken to their fullest extent they encompass nearly all aspects of a marketing campaign. Setting prices for any product or service is obviously a critical decision for the company. Schofield calls it "one of your most important and challenging responsibilities" (1999). The company must make a profit – that is a given: no profit means no more company. But the company must also determine a price for the product that helps the company prosper yet is also attractive to the consumer. This requires a calculation that includes the costs the company incurred to develop, produce, and then sell the product. The cost is the "sum total of the fixed and variable expenses to manufacture or offer your product or service" (Schofield, 1999). Fixed costs include things like rent, insurance, office equipment, utilities, salaries for executives, property taxes, and depreciation (Schofield, 1999). Variable costs include things like raw materials used to make the product, hourly wages, benefits, warehouse and shipping costs, commissions to salespeople, and advertising and promotion (Schofield, 1999). Variable costs change depending on the amount of goods that are produced (Schofield, 1999).
Thus, fixed costs are those that must be paid every month, or on whatever other regular schedule they are due, and variable costs are those that vary, or change, depending on what product is being produced (Sifleet, 2002). Variable and fixed costs need to be totaled and included in the cost of developing the product. The price is set somewhere between the actual cost of producing the product and the ceiling, which is the highest price that could be set for the item (Schofield, 1999). The break-even point must be established; in other words, what must be charged in order for the company to just break even between the revenue obtained for the product and the expenses of producing it (Schofield, 1999). Sifleet offers an example of an analysis to determine the break-even point in a training consulting firm (2002). The intended outcome of the analysis is to determine the appropriate rate to charge per hour for consultations (Sifleet, 2002). They begin by totaling all the fixed costs and arrive at a sum of $30,000 per year (Sifleet, 2002). The variable costs include the instructor's pay at $15 per hour (Sifleet, 2002). They then graph the costs for different amounts of billable hours per year (Sifleet, 2002). They also graph projections of revenue based on three different hourly rates: $30, $35, and $50 per hour (Sifleet, 2002). In order to be profitable, the company must generate more revenue than its costs, and from the graphs they find that at $30 per hour, the business will have to generate at least $60,000 of revenue in a year to break even, i.e., simply to cover its costs (Sifleet, 2002). They further calculate that to generate the $60,000, they will have to have 2,000 billable hours (Sifleet, 2002). Taking this further, they determine how many hours will have to be billed each week and find that with a 50-week work year, 40 hours each week must be billed just to break even (Sifleet, 2002).
Thus, the $30 per hour fee will not work; it is not realistic for the company because the only way it can earn a profit is by scheduling far more than 40 hours per week (Sifleet, 2002). Further calculation tells the company that at $35 per hour it needs to schedule 30 hours per week, and at $50 per hour, 17 hours per week (Sifleet, 2002). Remember, these are the break-even points, just covering costs. These calculations demonstrate that the floor price is $35 per hour, but to make a profit the firm is going to have to set a fee of $50 per hour. So, the floor price is the break-even point and the ceiling price is what the market will bear, the highest price the consumer would pay (Sifleet, 2002). The appropriate price is somewhere in between these two extremes. The price must be high enough for the company to grow and low enough to be attractive to consumers. When pricing anything, the other factor that must be considered is the value of the product to the consumer. As we already stated, "consumers will pay a higher price for things they perceive to hold significant value for them" (Schofield, 1999). Sifleet also brings the factor of perceived value into the equations.
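Sifleet's break-even arithmetic above can be reproduced in a few lines. The $30,000 fixed cost, $15-per-hour variable cost, the candidate rates, and the 50-week year all come from the example; the code itself is only an illustrative sketch:

```python
FIXED_COSTS = 30_000   # total annual fixed costs from the example
VARIABLE_COST = 15     # instructor pay per billable hour
WEEKS_PER_YEAR = 50

def break_even_hours(rate_per_hour):
    """Billable hours per year at which revenue just covers all costs:
    rate * h = FIXED_COSTS + VARIABLE_COST * h, so h = fixed / (rate - variable)."""
    margin = rate_per_hour - VARIABLE_COST
    if margin <= 0:
        raise ValueError("hourly rate does not even cover the variable cost")
    return FIXED_COSTS / margin

for rate in (30, 35, 50):
    hours = break_even_hours(rate)
    print(f"${rate}/hr: {hours:.0f} hours/year = {hours / WEEKS_PER_YEAR:.0f} hours/week")
```

Running this reproduces the figures in the text: 40, 30, and roughly 17 hours per week for the $30, $35, and $50 rates, respectively.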


The Accounting Crisis as a Result of the Enron Case, and Its Implications on the U.S. Accounting Profession

Dr. Dhia D. AlHashim, California State University, Northridge, California



In a free-enterprise economy, the integrity of the economic system is crucial to investors' confidence. Lately, with the discovery of so many business scandals, investors' confidence in the corporate system and the accounting profession has eroded. The purpose of this research is to investigate the reasons for the recent business scandals, particularly that of Enron Corporation, the impact on the U.S. accounting profession, and the lessons learned for developing nations. On August 14, 2001, Mr. Jeffrey Skilling, CEO, resigned from Enron; on November 8, 2001, Enron restated its earnings for the years 1997 through 2000. On November 30, 2001, Enron filed for bankruptcy protection. Enron wiped out $70 billion of shareholder value, defaulted on tens of billions of dollars of debt, and its employees lost their life savings (their pension plan consisted of Enron stock). The question is: why did Enron collapse? There is only one answer, in my opinion, and that is derivatives! A major portion of these derivatives relates to the now infamous "Special Purpose Entities (SPEs)." Enron Corporation was one of the pioneers of energy deregulation and became a major force in the trading of energy contracts in the U.S. and overseas markets. Last year, the company was considered the seventh largest company in the U.S., with revenues exceeding $150 billion and assets of more than $60 billion. It handled about one quarter of the U.S.'s traded electricity and natural gas transactions. However, it appears that Enron's success was not entirely due to the brilliant business strategies developed by its former chairman Ken Lay. As the unraveling scandal shows, a significant portion is attributable to innovative financing and accounting strategies. There is no question that the continuation of deregulation of the economy and the privatization of services depends on the integrity of financial reporting systems. Integrity can be achieved by having a fair and transparent accounting system.
It is alleged that accountants are compromising their integrity, by manufacturing companies' earnings, for the sake of obtaining a piece of the action! Observing recent unusual business events leads us to the conclusion that it is not only Enron that has been manufacturing earnings and hiding debts in subsidiaries and partnerships with the help of its accountants; many other U.S. companies are hiding trillions of dollars of debt in off-balance-sheet subsidiaries and partnerships, such as UAL ($12.7 billion), AMR, parent of American Airlines ($7.9 billion), J.P. Morgan Chase ($2.6 billion), Dell Computer ($1.75 billion), and Electronic Systems ($0.5 billion). This research investigates the impact of these recent business scandals, particularly that of Enron Corporation, on the U.S. accounting profession, with possible lessons learned for developing countries. Enron's goal of becoming "the world's greatest company" required a continuous infusion of cash. This in turn demanded favorable debt/equity ratios and high stock prices. To accomplish these goals, under the leadership of its former chief financial officer (CFO) Andrew Fastow, Enron developed an increasingly complex financial structure and utilized a bewildering network of partnerships and SPEs. To generate the cash, Enron formed a new SPE, Chewco, consisting of Enron executives and some outside investors (see Exhibit 1). To take advantage of loopholes in U.S. generally accepted accounting standards, companies establish SPEs by having outside investors contribute 3% of the capital of these SPEs, so that the SPEs can be considered independent and kept off the balance sheets of the corporations that contribute the other 97% of the invested capital! By creating these SPEs, Enron was no longer required, per U.S. generally accepted accounting standards, to include in its financial statements the assets and liabilities of the SPEs of which it owned 97%.
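The balance-sheet effect of the 3% loophole described above can be illustrated with a toy calculation. All dollar figures below are hypothetical; only the 3%-outside / 97%-sponsor split comes from the text:

```python
def debt_to_equity(debt, equity):
    """Simple leverage ratio watched by lenders and rating agencies."""
    return debt / equity

# Hypothetical sponsor: $10B of debt and $5B of equity on its own books,
# plus $6B of debt parked in an SPE in which outside investors hold only
# 3% of the capital (the sponsor holds the remaining 97%).
sponsor_debt, sponsor_equity, spe_debt = 10.0, 5.0, 6.0  # $ billions

# As reported, with the SPE left off the balance sheet:
print(debt_to_equity(sponsor_debt, sponsor_equity))            # 2.0

# As it would look if the 97%-owned SPE were consolidated:
print(debt_to_equity(sponsor_debt + spe_debt, sponsor_equity)) # 3.2
```

In this toy case the reported ratio understates true leverage by more than a third, which is exactly the kind of financial façade the paper describes.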
Enron thus removed a substantial amount of liabilities from its balance sheet, eliminated from its income statement hundreds of millions of dollars of expenses, and included false gains on its speculative investments in various technology-oriented companies. The net impact of these practices was the creation of a financial-powerhouse façade that misled investors. Enron may have been just an energy company at its inception in 1985, but by the end it had become a full-blown OTC derivatives trading company. Its OTC derivatives-related assets and liabilities increased more than fivefold during the year 2000 alone. Since OTC derivatives trading is beyond the purview of organized, regulated exchanges, Enron fell into a regulatory black hole! Enron collapsed because of the derivatives deals it entered into with its more than 3,000 off-balance-sheet subsidiaries and partnerships, such as JEDI, Raptor, and LJM. Derivatives are complex financial instruments whose value is based on one or more underlying variables, such as the price of a stock, an interest rate, a foreign exchange rate, an index of prices or rates, a commodity price (e.g., the cost of natural gas), or other variables. The size of derivatives markets typically is measured in terms of "notional amounts" (a number of currency units, shares, bushels, pounds, or other units specified in the contract). Recent estimates of the size of the exchange-traded derivatives market, which includes all contracts traded on the major options and futures exchanges, are in the range of $13 to $14 trillion in notional amount. By contrast, the estimated notional amount of outstanding OTC derivatives as of year-end 2000 was $95.2 trillion, which represents about 90% of the aggregate derivatives market, with trillions of dollars at risk every day. Derivatives can be traded in two ways: on regulated exchanges or in unregulated over-the-counter (OTC) markets.
Enron dealt in the latter, capitalizing on the inaction of the U.S. Commodity Futures Trading Commission and on the Commodity Futures Modernization Act, passed by the U.S. Congress in December 2000, which made the deregulated status of OTC derivatives clear.


The Relationship Between Dividends and Accounting Earnings

Dr. Michael Constas,  California State University, Long Beach, CA

Dr. S. Mahapatra, California State University, Long Beach, CA



This research examines the relationship between dividends and earnings. The model used here is a variation of the model tested in Fama and Babiak (1968), which has remained essentially unaltered in the subsequent empirical literature. The importance of this model is underscored by Copeland and Weston (1988), Kallapur (1993), and Healy and Modigliani (1990), who used it to examine the influence of inflation on dividend policy. This research, however, differs from the Fama and Babiak model in important respects. The Fama and Babiak model is linear, while the model tested in this research is a linear logarithmic transformation of a nonlinear relationship. The Lintner (1956) and Fama and Babiak (1968) model has an additive error term with a normal distribution, whereas the model tested herein assumes that the underlying relationship has a multiplicative error term with a lognormal distribution. The empirical results reported in this paper reflect an improvement over the results obtained by using the original Fama and Babiak (1968) model. The Fama and Babiak (1968) study involved running separate regressions for each firm. In the revised model (used here), the cross-sectional parameters are significant, and, in both cross-sectional and separate firm regressions, the revised model produces higher adjusted R2s than are produced by the Fama and Babiak model. The Fama and Babiak (1968) model is based upon the premise that a firm's current year's dividends reflect its current year's earnings. The prior year's dividends are subtracted from both the current year's dividends and earnings in order to produce the change in dividends as an independent variable. The empirical results reported here, however, suggest that the presence of the prior year's dividends as an independent variable is an important part of the relationship between dividend changes and earnings changes.
Current dividends appear to be adjusted when a firm experiences earnings that are inconsistent with prior dividend declarations. This adjustment can be explained in two ways. First, it may be that a firm readjusts its dividends when it experiences inconsistent earnings because its ability to pay dividends has changed. Second, the adjustment may be due to the fact that dividends serve as management's signal as to how the firm is expected to perform in the future, and this signal changes due to new information. If the second explanation were correct, dividends would offer important information regarding management's expectations of a firm's future earnings. The model developed in this section has strong similarities to, but important differences from, the model tested in Fama and Babiak (1968), which was based upon a model developed in Lintner (1956). In what follows, d_it denotes the dividends declared by firm i in year t, and e_it denotes the earnings of firm i in year t. As noted in Fama and Babiak (1968), the dividends declared during any year by a firm reflect the earnings of that firm for the current year. The Fama and Babiak (1968) model assumed a linear relationship between (d_it+1/d_it) and (e_it+1/d_it) with an additive error term, similar to the following: d_it+1/d_it = a + b(e_it+1/d_it) + u_it+1 (3.5). The relationship between (d_it+1/d_it) and (e_it+1/d_it) also could be structured as a nonlinear relationship with a multiplicative error term, d_it+1/d_it = a(e_it+1/d_it)^b * u_it+1, and the error term could have a lognormal distribution. If this were the case, then a logarithmic transformation of that relationship would produce equation (3.6): ln(d_it+1/d_it) = ln(a) + b ln(e_it+1/d_it) + ln(u_it+1) (3.6). The issue of whether to model a relationship using an additive error term with a normal distribution or a multiplicative error term with a lognormal distribution is discussed in Judge et al. (1980, pp. 844-45). Judge suggests that a test outlined in Leech (1975) may be used to determine whether a version of the basic model using an additive error term or a multiplicative error term is more appropriate for a given data set.
The Leech test provides that the version of the basic model producing the higher value for the log likelihood function is the more appropriate version of that model.  Equations (3.5) and (3.6) were tested using the Leech test, and equation (3.6) produced the larger log likelihood value. A separate OLS regression is run for each firm across all years.  In order to be included in this regression, a firm needed to have observations for at least 15 years.  To take into account annual differences in dividend payment policies, a variable consisting of the median dividend per share for the sample for the current year (determined prior to the screen for “higher” P/E ratios) divided by the median dividend per share for the sample for the prior year (determined prior to the screen for “higher” P/E ratios) was included.
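The per-firm log-linear estimation described above can be sketched as follows. The data here are simulated; the true parameters, sample size, and noise level are hypothetical choices, and only the functional form (a logarithmic transformation of a multiplicative lognormal-error model, estimated by OLS) follows the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate one firm's annual observations: d_growth = a * (e/d)^b * u,
# with u lognormal, as in the multiplicative-error version of the model.
n = 20                                    # at least 15 years, matching the paper's screen
e_over_d = rng.uniform(1.5, 4.0, size=n)  # e_{it+1} / d_{it}
a_true, b_true = 0.8, 0.4                 # hypothetical true parameters
u = rng.lognormal(mean=0.0, sigma=0.1, size=n)
d_growth = a_true * e_over_d**b_true * u  # d_{it+1} / d_{it}

# The logarithmic transformation makes the model linear and estimable by OLS:
# ln(d_{t+1}/d_t) = ln(a) + b * ln(e_{t+1}/d_t) + ln(u)
X = np.column_stack([np.ones(n), np.log(e_over_d)])
y = np.log(d_growth)
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print("ln(a) estimate:", coef[0], "  b estimate:", coef[1])
```

The regression recovers values close to ln(0.8) ≈ -0.22 and 0.4; with real data, one such regression would be run for each firm, as in the paper.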


Trade and the Quality of Governance

Dr. Fahim Al-Marhubi, Sultan Qaboos University, Sultanate of Oman



Different strands of the trade and governance literature imply a link between the openness of an economy to international trade and the quality of its governance. This paper tests this proposition using a newly created dataset on governance that is multidimensional and broad in cross-country coverage. The results provide evidence that the quality of governance is significantly related to openness in international trade. This association is robust to alternative specifications, samples, and governance indicators. The last decade has witnessed an explosion of research on economic growth. Two issues that lie at the heart of this research are the role of international trade and that of governance in promoting growth and better development outcomes. However, due to conceptual and practical difficulties, these two lines of research have run in parallel without explicit recognition of each other. Conceptually, the relationship between openness and governance has been left rather imprecise, with a notable absence of a convenient theoretical framework linking the former to the latter. Practically, the difficulty lies in defining governance. While it may appear to be a semantic issue, how governance is defined actually ends up determining what gets modeled and measured. For example, studies that examine the determinants of governance typically tend to focus on corruption (Ades and Di Tella, 1999; Treisman, 2000). However, governance is a much broader concept than corruption, and little has been done to address the other dimensions of governance discussed in the next section. The purpose of this paper is to investigate more systematically the link between the openness of an economy and the quality of its governance. A practical difficulty that arises, however, in trying to estimate openness' exogenous impact on governance in a cross section of countries is that the amounts that countries trade are not determined exogenously.
Openness may be endogenous, since it is quite likely that countries that can manage risks and exploit opportunities from trade because of their high-quality governance choose, or can afford, to be more open. Hence, better governance can lead to greater openness rather than the other way round. As a result, correlations between openness and governance may not reflect an effect of trade on governance. In estimating the impact of openness on governance, what is needed is a source of exogenous variation in openness. Using measures of countries' trade policies in place of (or as an instrument for) trade will not solve this problem, since countries that have better governance may also adopt free-market trade policies. To cope with this problem, this paper estimates trade's impact on governance by instrumental-variable estimation, using the component of trade that is explained by geographic factors as an instrument for openness. This instrument, constructed by Frankel and Romer (1999), exploits countries' geographic characteristics (their sizes, populations, distances from one another, whether they share a common border, and whether they are landlocked) as a source of exogenous variation in trade. The suitability of this instrument rests on the premise that geography is an important determinant of countries' (bilateral as well as total) trade, and that countries' geographic characteristics are unlikely to be correlated with their governance, or affected by policies and other factors that influence governance. Using a newly created dataset on governance that is multidimensional and broad in cross-country coverage, the results indicate a significant positive relationship between the openness of an economy and the quality of its governance. This association is robust to changes in specification, datasets, and indicators of governance. There have been a number of different attempts at defining governance (World Bank, 2000).
Despite the absence of a precise definition, a consensus has emerged that governance broadly refers to the manner in which authority is exercised. Defined in this way, governance transcends government to include relationships between the state, civil society organizations, and the private sector. It includes the norms defining political action, the institutional framework in which the policymaking process takes place, and the mechanisms and processes by which public policies are designed, implemented, and sustained. Frequently identified governance issues include the limits of authority and leadership accountability, transparency of decision-making procedures, interest representation, and conflict resolution mechanisms. If governance is difficult to define, it is even more difficult to measure. Empirical studies, in either a time-series or a cross-sectional context, have deployed a variety of proxy measures, ranging from indicators of civil liberties, frequencies of political violence, and investor risk ratings to surveys of investors and aggregated indexes. This paper relies on the recent definition proposed, and the proxy measures constructed, by Kaufmann et al. (1999b). Kaufmann et al. (1999a: 1) define governance as “the traditions and institutions by which authority is exercised. This includes (1) the process by which governments are selected, monitored and replaced, (2) the capacity of the government to effectively formulate and implement sound policies, and (3) the respect of citizens and the state for the institutions that govern economic and social interactions among them.” Operating on the principle that more information is generally preferable to less, Kaufmann et al. (1999b) aggregate governance indicators from several sources into an average or composite indicator – a poll of polls.
The raw data used to construct the composite governance indicators are based on subjective perceptions regarding the quality of governance, often drawn from cross-country surveys conducted by risk agencies and surveys of residents carried out by international organizations and other non-governmental organizations. Using an unobserved components methodology, Kaufmann et al. (1999b) combine more than 300 related governance measures into six aggregate (composite) indicators corresponding to six basic governance concepts, namely Voice and Accountability, Political Instability and Violence, Government Effectiveness, Regulatory Burden, Rule of Law, and Graft.
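The two-stage least squares (2SLS) logic behind the instrumental-variable strategy described above can be sketched in a few lines of code. The following is a minimal illustration only, not the paper's actual estimation: the variable names and the synthetic data-generating process are invented for the example, with the variable z standing in for a geography-style instrument such as the Frankel-Romer predicted-trade measure.

```python
import numpy as np

def two_stage_least_squares(y, x, z):
    """Estimate the effect of an endogenous regressor x on outcome y
    using instrument z, via two-stage least squares (2SLS).
    Inputs are 1-D arrays; an intercept is added internally."""
    n = len(y)
    ones = np.ones(n)
    # Stage 1: project the endogenous regressor on the instrument.
    Z = np.column_stack([ones, z])
    gamma, *_ = np.linalg.lstsq(Z, x, rcond=None)
    x_hat = Z @ gamma
    # Stage 2: regress the outcome on the stage-1 fitted values.
    X_hat = np.column_stack([ones, x_hat])
    beta, *_ = np.linalg.lstsq(X_hat, y, rcond=None)
    return beta[1]  # the instrumented slope coefficient

# Illustrative synthetic data (not the paper's dataset).
rng = np.random.default_rng(0)
n = 500
z = rng.normal(size=n)                # geography-predicted trade (instrument)
u = rng.normal(size=n)                # unobserved shock hitting both variables
openness = 0.8 * z + 0.5 * u + rng.normal(size=n)   # endogenous regressor
governance = 1.0 * openness + 2.0 * u + rng.normal(size=n)

print(two_stage_least_squares(governance, openness, z))
```

In this construction a naive OLS regression of governance on openness would be biased upward by the shared shock u, whereas the 2SLS estimate should land close to the true coefficient of 1.0, which is exactly the reverse-causation problem the geographic instrument is meant to solve.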


Caribbean Economic Integration: The Role of Extra-Regional Trade

Dr. Ransford W. Palmer, Howard University, Washington, DC



This paper examines the feedback effect of extra-regional trade on intra-regional imports of the Caribbean Community (CARICOM). Because of the non-convertibility of CARICOM currencies, intra-regional trade must be settled in hard currency, typically the U.S. dollar. It is argued that the growth of extra-regional trade generates foreign exchange, which stimulates the growth of gross domestic product and of intra-regional imports. Over the past thirty years, there has been an explosion of common market and free trade arrangements around the world, all of them designed to foster trade and promote economic growth. NAFTA and the European Economic Community are the two dominant ones, but in Africa and Latin America there are numerous others. Theoretically, the benefits from these arrangements seem particularly attractive for groupings of developing countries, but in practice numerous obstacles tend to hinder their full realization. This is particularly true of CARICOM, a grouping of small Caribbean economies where the benefits tend to be constrained by, among other things, their small size and openness. This paper examines the impact of extra-regional trade on the economic integration effort. After the failed attempt at political union in the English Caribbean in 1961, the search for economic cooperation led to the creation of the Caribbean Free Trade Association (CARIFTA) in 1969.
In 1973 the Treaty of Chaguaramas  replaced CARIFTA with  the Caribbean Community  and Common Market (CARICOM) and set  the following objectives (Article 3 of the Annex to the Treaty): the strengthening, coordination and regulation of economic and trade relations among Member States in order to promote their  accelerated harmonious and balanced development; the sustained expansion and continuing integration of economic activities, the benefits of which shall be equitably shared taking into account the need to provide special opportunities for the Less Developed Countries; the achievement of a greater measure of economic independence and effectiveness of its member states, groups of states and entities of whatever description.  In the three decades since 1973, efforts to achieve these objectives have been buffeted by major external shocks. The oil shocks of the 1970s favored the only oil producer in CARICOM, Trinidad and Tobago, and punished all the oil importers. The recession that followed in the industrial countries of North America and Europe curtailed Caribbean exports. And the rise of socialist governments in the Caribbean during the 1970s choked off foreign private investment and crippled economic growth, particularly in Jamaica and Guyana. Unilateral trade concessions provided in the 1980s  by the United States (the CBI), Canada (CARIBCAN), and Europe (the Lomé Convention) helped to offset some of the negative impact of these shocks  but they also reinforced the extra-regional export orientation of the Community.  The 1990s saw a gradual weakening of some of these preferential trading arrangements under the rules of the World Trade Organization. Because of the importance of external markets, these external influences have caused individual member countries to focus more on expanding these markets  than on expanding the intra-regional market.  As a consequence the urgency of integration has diminished.  
The region’s ability to benefit fully from economic integration is restrained by four principal factors: the limited mobility of labor and capital, the absence of a common regional currency, the slowness of establishing a common external tariff, and the similarity of products produced for export. The mobility of capital has been limited by cross-country diversity in legislative and development strategies (IMF Staff Report, 1998), and a regional stock exchange that could enhance capital mobility is yet to emerge. But the biggest restriction on capital mobility lies in the non-convertibility of the national currencies. As a result, the transaction cost of doing business is high and capital does not always move into its most productive uses. There appears to be no great urgency on the part of political leaders about creating a common currency. At their 1984 meeting in the Bahamas, the heads of government rejected the concept of a single currency, preferring instead to make the US dollar the common unit of exchange by pegging their currencies to it. This reluctance to create a single currency is attributed to the fact that such a currency would require a common monetary policy and therefore a regional central bank – a step that would undermine the sovereignty of national monetary policy. It is the non-convertibility of Caribbean currencies that makes the US dollar the currency of settlement in intra-regional trade (Williams, 1985). This means that the growth of intra-regional trade is limited by the availability of US dollars. Restriction on labor mobility among CARICOM countries is motivated by the fear that some countries may export their unemployment to others. This was a major contributing factor to Jamaica’s break-away from the Federation of the West Indies in 1961: Jamaica saw itself as being inundated by an inflow of labor from the high-unemployment economies of the Eastern Caribbean. Some marginal steps have been taken to improve labor mobility.
Nine member states have so far agreed to eliminate the need for work permits for CARICOM nationals who are university graduates, artistes, sports persons, musicians, and media workers. Eight member states also accept forms of identification other than passports from CARICOM nationals to facilitate inter-island travel.


What’s in an Idea?  The Impact of Regulation and

Rhetoric on the US Food Supply Chain

Dr. Lorna Chicksand, The University of Birmingham, Birmingham, UK



This paper seeks to explore the relationship between government and business through an examination of regulation pertaining to the US agri-food sector. It will be argued that regulation can act as a power resource, determining who appropriates value in the supply chain. However, political intervention in the market creates differentially advantageous positions for some to the detriment of others and, as such, the political allocation of rents is a dynamic process in which firms compete to control this allocation. Thus, a further argument of this paper is that other power resources are available to firms, which can be used as countervailing sources of power to undermine and overturn regulation. In particular, the paper will focus on the role of ideas as ‘weapons’, which can be used by firms as resources to overturn unfavourable regulation. This paper will argue that the policy changes brought about under the 1996 Farm Bill (which replaced the New Deal-era target price/deficiency payment structure for feedgrains, wheat, cotton and rice with ‘production flexibility contract payments’, thus decoupling the payments from either the commodity price or the amount of crop produced) could only be brought about by a corresponding change in the ideas which underpinned agricultural policy. It will be argued that these policy changes, which favoured agribusiness interests at the expense of production agriculture, were the result of a long-term campaign waged by agribusiness to change the terms of debate within which US agricultural policy was framed. Although ‘decoupling’ had been on the agricultural agenda since as early as the 1950s, the paper will argue that more wholesale changes did not occur earlier because: (1) production agriculture acted as a countervailing interest to agribusiness; and (2) the farm fundamentalist ideology had become “locked-in” to the AAA, and had become ‘cemented’ institutionally.
However, by the 1980s, the agri-food supply chain had become increasingly integrated, with agribusiness assuming far more influence over policy direction than production agriculture and, as such, being able to work more vigorously to discredit the farm fundamentalist ideology. Agribusiness interests launched an ideological campaign, utilising the rhetoric of globalisation, competition and markets to redefine the problems facing agriculture. In doing so, they successfully changed the ideas which framed agricultural policy, which enabled more wholesale policy changes to be implemented. It is my contention that ideas are represented in the policymaking process in the form of ‘policy paradigms’. According to Hall (1993), policy paradigms delineate the boundaries of the policymaking process by prescribing policy goals, instruments and settings. He states (1993: 279) that: policy makers customarily work within a framework of ideas and standards that specifies not only the goals of policy and the kind of instruments that can be used to attain them, but also the very nature of the problems they are meant to be addressing…[T]his framework is embedded in the very terminology through which policymakers communicate about their work, and it is influential precisely because so much of it is taken for granted and unamenable to scrutiny as a whole. Policy paradigms, therefore, perform a “boundary-setting function” and do so in terms of political discourse. However, political discourse is not a given. As Hall (1993: 289) himself states: “the terms of political discourse privilege some lines of policy over others, and the struggle both for advantage within the prevailing terms of discourse and for leverage with which to alter the terms of political discourse is a perennial feature of politics”. Within this framework, it is how issues are defined that is crucial for understanding how policy evolves. Hall et al.
(1978: 59) point out how important problem definition is in affecting whether an issue will even reach the agenda: the primary definition sets the limit for all subsequent discussion by framing what the problem is. This initial framework then provides the criteria by which all subsequent contributions are labelled as ‘relevant’ to the debate, or ‘irrelevant’ – beside the point. Furthermore, drawing on insights from the constructionist approach, I would argue that problem definition can be discursively constructed. Whilst I do not wish to subscribe to the view that all reality is a linguistic construction and a product of human subjectivity, I believe that the constructionist approach is useful in that it draws our attention to the role of agency and the use of language in ideational/discursive constructions. Subject to structural constraints, it is possible for agents to construct ‘realities’ which ‘necessitate’ new policy responses. However, language is not neutral; it is bound up with notions of power, competition and conflict. Fairclough (1989: 90) notes that, in the struggle over language: what is at stake is the establishment or maintenance of one type as the dominant one in a given social domain, and therefore the establishment of certain ideological assumptions as commonsensical…The stake is more than ‘mere words’; it is controlling the contours of the political world, it is legitimising policy, and it is sustaining power relations. Although the American government had been involved in the promotion of agriculture since the 1860s, it has been argued that the AAA (the Agricultural Adjustment Act) radically altered government’s relationship to agriculture. This Act was passed to confront the severe problems within the agricultural sector, which witnessed farm prices dropping by fifty-six percent and gross farm income being halved between 1929 and 1932.
However, the AAA did not ‘come out of the blue’; it was the culmination of a more widespread ideational battle regarding the role of the government in managing the economy. 


Reform and Corporate Governance in Russia

Dr. Jennifer Foo, Stetson University, Deland, FL.



This paper looks at some issues in enterprise restructuring and reform in Russia. It examines the characteristics of privatization and Russian corporate governance, or the lack thereof. The issues of corporate governance and enterprise reform are particularly important for transitional economies when confronted with the realities of market discipline and global competition. The paper also considers the efforts to establish a corporate governance system in Russia. An empirical investigation was performed to compare Russia's transition progress to that of other eastern bloc countries such as Poland and Hungary. An investigation of Russia's enterprise reforms and corporate governance may also stimulate institutional changes in Russia and other former socialist countries. In the past decade, the post-communist countries of Russia and Eastern Europe have carried out transitional reforms. Efforts have been made to privatize state-owned enterprises (SOEs) by transferring ownership to private-sector owners. The initial transition efforts paid off in significant gains in real GDP growth for most of the transition countries, as Table 1 indicates. Russia, however, experienced negative growth rates and insignificant growth after the transition. The past decade has shown that countries like Poland, Hungary and the Czech Republic are weathering the transition relatively well, while Russia and Romania are encountering serious transitional problems. Privatization, in itself, is insufficient to effect a successful transition to a market economy. What is needed is effective privatization complemented by structural reforms in the legal sector to support and enforce the reforms. Privatization has to occur if a post-communist country is to transform its planned, state-owned economy into a market economy. Privatization promotes economic growth when shareowners have an incentive to maximize wealth through firm value.
Successful privatization has to consider reforms in three general dimensions: an effective corporate governance system, policies that support business enterprise, and a legal system that protects stockholder rights. The initial phase of privatization is not expected to be optimal, as evidenced by the negative real GDP growth of most of the transition countries. Poland, Hungary, and the Czech and Slovak Republics have experienced consistent positive growth in real GDP in the second half of the decade since the transition process began. However, Russia and Romania have made the least progress. In Bulgaria and Romania, where the transition governments are weak, and in Russia, where there is greater political instability, the privatization programs opened up opportunities for managers to strip enterprise assets and maximize personal cash flows. Consequently, the public's perception of economic injustice from the privatization process undermined support for privatization, particularly when the standard of living deteriorated after privatization. The examples of Polish and Czech privatization show, first, that without a firm and clear consensus by the political authorities as to how shares are to be allocated, a privatization plan is doomed to failure, as in the Polish experience. Secondly, if the government supports and provides an opportunity for the private sector to create investment-holding firms and provides safeguards against fraudulent holding firms, the privatization plan has a better chance to succeed, as in the Czech experience. Russia initiated privatization of its state-owned enterprises around 1992. The transfer of SOE assets to the private sector took the form of management-employee buyouts (MEBOs) through the voucher system. The privatization had a significant impact on the enterprise sector.
The emphasis of enterprise goals shifted from the maximization of political and defense targets under planned administration to profit maximization and wealth accumulation. The Russian government recognized the importance of an effective corporate governance system, but the notion of workers' rights and privileges is still strong in post-communist Russia. Communist doctrine, which emphasizes full employment regardless of workers' productivity, remains prevalent, and the privatization process exacerbated unemployment problems. As a result, the regional governments, with power decentralized to them in the post-communist era, are even more resistant to change and fiercely protect their workers' employment because of their closer ties to the local SOEs. In Russia, as in other transitional economies, depoliticization of the privatization and resource allocation processes is crucial in severing or reducing the SOEs' dependence on the state and rent-seeking behavior by former political elites. Da Cunha and Easterly (1994) found that selected enterprises and financial conglomerates received massive financial flows from the Russian government in the early privatization period of 1992-93, totaling 33% of GDP. Due to the no-cost "voucher giveaway" transfer of SOE assets to the MEBO stockholders, the privatized SOEs are resistant to change, with little incentive to maximize firm value. The government's lack of commitment to a hard budget constraint also encouraged Russian managers to depend on the state for soft-budget credits. This only bolstered managers' lack of motivation to change, particularly when they knew that bankruptcy laws were difficult to enforce and not strictly applied.


Information Communication Technology in New Zealand SMEs

Dr. Stuart Locke, University of Waikato, Management School, New Zealand

Dr. Jenny Cave, University of Waikato, Management School, New Zealand



The New Zealand Government has shown a concern to promote the use of information communication technology (ICT) by New Zealand small to medium-sized enterprises (SMEs). There has been an enquiry into telecommunication regulation and an ongoing commitment to an E-summit programme, the latter involving both Government and enterprise in ongoing dialogue and public fora. In the May 2002 Budget for the fiscal year beginning July 1, 2002, the Government introduced a new regional broadband initiative to provide assistance where private telecommunication companies find it unprofitable to upgrade the infrastructure. This study investigates the perceptions of SMEs, as solicited through a quarterly SME survey conducted for the Independent Business Foundation. The survey is now in its third year and provides the opportunity for monitoring changing sentiments and addressing new issues as and when they arise. The perceptions of various groups integrally involved with the small and medium enterprise (SME) sector regarding information communication technology (ICT) are analysed in this paper. The Economist Intelligence Unit/Pyramid Research (EIU) study (2001) into levels of E-preparedness ranked New Zealand 20th, down from 16th the year before. While the impact of ICT across the whole business sector is important, it is essential that the SME sector, including micro businesses, should capture some of the efficiency gains. Government has continued to push ICT, but there has been increasing disquiet that business is not moving quickly enough to catch the knowledge wave. Science Minister Hon Peter Hodgson, addressing a pharmaceutical conference in March 2002, observed “I have watched us miss the ICT bandwagon, if I can be blunt.
And it’s not going to happen again.” (New Zealand Herald, p. E3). Trade NZ, a government department, notes the importance of unleashing the potential gains from ICT for SMEs in underpinning its recent programme of assistance: New Zealand has no other option but to adopt e-business and increase participation of its SMEs in the global economy. E-business has the potential to expand the country’s current exports and grow the number of new exporters. Since uptake of true e-commerce is slow among exporters and other companies, the New Zealand Trade Development Board (Trade New Zealand) has taken on a leadership role through a NZ$10 million project supported by additional funding from the Government (Trade NZ 2001). In the absence of a commercial imperative or a large stick/carrot regime, it may be relatively easy to succumb to complacency in times of reasonable economic growth. Currently, agricultural exports are doing relatively well given the higher international prices for commodities. Nevertheless, it is generally recognised that long-term sustainable competitive advantage needs to be built upon a strong foundation in the knowledge economy. With a small population, a relatively open economy, heavy compliance regimes relating to occupational safety and health, resource management, and employment relations, and the burden of social welfare vis-à-vis other emerging knowledge economies, there are multiple challenges to be faced. The SME sector, and in particular the micro business sector, is a very large component of the New Zealand economy. If ICT offers the opportunity of reducing costs and enhancing supply chain efficiency, then it is important that these potential gains accrue to the SME sector. At the national level, telcos (telecommunication companies) continue to dispute interconnection agreements.
“After Telecom refused to switch WorldxChange’s toll bypass phonecalls to Clear’s network, WorldxChange complained to the Commerce Commission on Friday June 1 accusing Telecom of abusing its market power” (New Zealand Herald, 9 June 2001). It is generally true that competition works to limit the extent to which there are deadweight losses in the system (Williamson 1996, p. 197). However, the ICT environment reflects an ineffectual regulatory and compliance policy framework, a typical problem in a heavily bureaucratic structure where administrative process is the objective rather than tangible efficiency gains. The EIU makes this point forcefully, commenting, “The importance of a regulatory regime geared to e-business is clear in our rankings; it is the main factor that puts Australia 18 places ahead of its neighbour New Zealand, which ranks only 20th.” This is despite the government’s involvement in a number of initiatives such as E-summit and ECAT (Electronic Commerce Action Team). The efficacy of these policies needs to be considered in the light of New Zealand’s deteriorating international ranking. Government policy in New Zealand relating to small to medium enterprises (SMEs) has altered significantly during the last decade (Nyamori and Lawrence, 1997). The changes have not followed a consistent pattern but rather have promoted considerable uncertainty in the environment. Commenting on the then most recently announced policy for SMEs, Welham (2000, p. 41) suggests, “they are ‘reinventions of the wheel’ for it has all been done before.” Scrimgeour and Locke (2001) review the decade from 1990, concluding that Government policy in a range of areas appears, among SMEs, to have low credibility. The SME survey has been conducted quarterly since 1999. The telephone interview consists of two parts. First, there are questions relating to the level of operating activity, and these are asked each quarter.
In addition, several special-interest questions are asked. These typically relate to topical issues, and the responses are prepared for business professional magazines. The minimum sample size of 400 provides a margin of error of less than 5%. The typical survey consists of 1,200 calls to allow meaningful regional and industry comparisons to be made. Sample selection is generated from ‘yellow pages’ telephone listings. The sample is programmed subject to constraints; specifically, two parameters are considered. First, the regions are balanced to ensure that more than 30 enterprises are selected in each chosen region. This biases the sample against the geographical concentration from Hamilton north. Similarly, the industry profiles are not representative of the proportions operating in the economy but rather ensure that minimum sample sizes are greater than 30.
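The stated margin of error can be checked with standard survey arithmetic. A minimal sketch, assuming the conventional 95% confidence level (z = 1.96) and the most conservative proportion p = 0.5, neither of which is stated in the text:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Half-width of an approximate 95% confidence interval for a
    proportion, evaluated at the most conservative value p = 0.5."""
    return z * math.sqrt(p * (1 - p) / n)

# With the survey's minimum sample of 400 responses:
print(round(margin_of_error(400), 3))  # prints 0.049, i.e. just under 5%
```

Under these assumptions, the minimum sample of 400 gives 1.96 × √(0.25/400) ≈ 0.049, consistent with the "less than 5%" claim; the full 1,200-call survey would tighten this to roughly 2.8% at the national level.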


The Internationalisation Process of UK Biopharmaceutical SMEs

Dr. Cãlin Gurãu, School of Management, Heriot-Watt University, Riccarton, Edinburgh, Scotland



The classical models and theories of internationalisation have considered export and out-licensing activities to be the main modes of entry into international markets. The structural changes in the global economy, the emergence of high-technology industries and the increased involvement of SMEs in international activities are challenging these theories. The development cycle of new products and technology has become long, complex and extremely costly. The lack of specialised resources in the domestic market has forced high-technology SMEs to initiate their internationalisation early in order to access essential complementary resources on a global basis. This paper investigates the internationalisation model and the entry modes of UK biopharmaceutical SMEs. Gurău and Ranchhod (1999) have shown that biotechnology is an industrial sector in which internationalisation is likely to occur, because: (1) the sources fuelling the biotechnology industry are international (i.e. finance, knowledge, legal advice, etc.) (Acharya, 1998 and 1999; Russel, 1988); (2) the marketing of biotechnology products and services is international (Acharya, 1999; Daly, 1985); (3) the competition in the biotechnology sector is international (Acharya, 1999; Russel, 1988); and (4) the international community closely scrutinizes scientific and industrial developments in biotechnology (Acharya, 1999; Bauer, 1995; Nelkin, 1995; Russel, 1988). The large pharmaceutical and chemical corporations which began to diversify their activity into biotechnology from the early eighties had the managerial expertise and the financial resources to develop this activity on a global basis (Daly, 1985). They used their existing networks of international assets to solve the problems related to the novel technologies and emerging markets and to defend their dominant position within the industrial markets (Chataway and Tait, 1993; United Nations, 1988).
On the other hand, small and medium-sized biotechnology enterprises (SMBEs) are confronted with important problems in their process of internationalisation: limited financial resources, the management and processing of huge amounts of information, restrictive regulations, unfamiliar market environments, etc. These represent significant barriers to entry into foreign markets (Acs et al., 1997; Chataway and Tait, 1993; Daly, 1985; OECD, 1997). In spite of these problems, global competition and the structural limitations of their domestic market compel them to become international (Acs and Preston, 1997; Fontes and Coombs, 1997; Daly, 1985). This paper attempts to investigate the internationalisation model specific to UK small and medium-sized biopharmaceutical enterprises (SMBEs), with a special emphasis on the market entry modes designed and implemented by these firms. The classical internationalisation theories are mainly based on two models: the Uppsala model, developed by Johanson and Wiedersheim-Paul (1975) and then refined by Johanson and Vahlne (1977, 1990); and the Management Innovation model, described in the work of Bilkey and Tesar (1977), Cavusgil (1980), Czinkota (1982) and Reid (1981). The evolution of the company from a mainly domestic activity to a fully international profile is described as a slow, incremental process which involves the gradual acquisition, integration and use of knowledge concerning the characteristics of foreign markets, as well as an increasing commitment of the company’s resources towards international activities. The model also predicts that a firm will first target the markets that are most familiar in terms of language, culture, business practice and industrial development, in order to reduce the perceived risk of the international operations and to increase the efficiency of information flows between the firm and the target market (Johanson and Vahlne, 1977 and 1990).
The classical theories of internationalisation have been extensively challenged over the years, with numerous scholars advancing various criticisms of their validity and assumptions (Knight and Cavusgil, 1996). These criticisms helped to refine the outline of the earlier models, whether regarding the incremental character of the internationalisation process (Johanson and Vahlne, 1990) or the main causes and factors that determine and influence the evolution of a company through different stages (Reid, 1984; Welch and Luostarinen, 1988). Cavusgil and Zou (1994), Reid and Rosson (1987) and Welch and Luostarinen (1988) show that the initiation of international operations is usually the result of careful strategic planning, which takes into consideration a wide array of factors such as the nature of the foreign market, the firm’s resources, the type of product, the product life cycle, and the level of anticipated demand in the domestic market. On the other hand, the path to internationalisation does not necessarily have to follow the prescribed stages of development, with many other combinations of strategic options being available to companies (Reid, 1983; Rosson, 1987; Turnbull, 1987). For example, international sales can initially be realised through a joint venture or an international network of strategic alliances (Hakansson, 1982). Other companies may become international by following alternative paths such as licensing, manufacturing or collaborative arrangements, without ever engaging in export activities (Carstairs and Welch, 1982/1983; Reid, 1984; Root, 1987). The definition of small and medium-sized enterprises has fluctuated over the years, using as its main criteria the number of employees and the annual turnover. For the purpose of this study, firms with up to 50 employees will be considered small, and those employing 51-500 people medium-sized.
This classification is used by the UK Centre for Exploitation of Science and Technology (Keown, 1993). It is widely accepted that SMEs have characteristics different from those of larger companies (Carson et al., 1995; Jennings and Beaver, 1997). These differences are reflected in three main features (Levy and Powell, 1998): SMEs have limited internal resources; they are managed in an entrepreneurial style; and they usually have little influence on their market environment. The specific features of the biopharmaceutical sector create a series of problems and advantages for the international marketing activities of SMBEs.
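The size bands used in this study can be pinned down in a short sketch (illustrative only; the function name is hypothetical, and the cut-offs are the ones cited from the UK Centre for Exploitation of Science and Technology):

```python
def classify_firm(employees: int) -> str:
    """Classify a firm by headcount using the size bands adopted in this
    study: up to 50 employees = small, 51-500 = medium-sized."""
    if employees <= 50:
        return "small"
    elif employees <= 500:
        return "medium"
    else:
        return "large"
```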


Impact of Company Market Orientation and FDA Regulations on Bio-Tech Product Development

Dr. L. William Murray, University of San Francisco, CA

Dr. Alev M. Efendioglu, University of San Francisco, CA

Zhan Li, Ph.D., University of San Francisco, CA

Paul Chabot, Xis, Inc., San Francisco, CA



New products produced by Bio-Technology firms – products designed to treat or cure human illnesses – require large investments ($150 million +) and take a long time (10-12 years) from idea generation through product launch.  These products require full authorization by the U.S. Food and Drug Administration (FDA) before the developing firms are permitted to sell them for use by patients.  Little is known about the management processes by which these products are developed.  Even less is known about the impact of FDA regulation on the manner in which these products are developed, produced, and distributed.  The purpose of this paper is to report the results of a recent survey of professionals employed by Bio-Tech firms to develop new products. The FDA must approve all new pharmaceutical and medical device products designed for use by individuals.  A firm interested in developing a new pharmaceutical product must file an application with the FDA, state the goal and define the approach towards discovering possible new products, and provide the FDA with a detailed statement as to how the development process will be managed.  If approved, the firm can take the first steps towards developing the product, each of which must be recorded, analyzed, and summarized in performance review reports to the FDA.  Three earlier studies researched the possible impacts of FDA regulation on the development and marketing of new products. An early study of the development and production process of diagnostic-imaging equipment suggested that for this type of medical device FDA regulation had little effect.  A later study by Rochford and Rudelius (1997) suggested that there are regulatory influences and impacts on product development, if one examines the number of development activities (i.e., stage gates) that the firm performed in developing a new product.
A more recent third study of medical device producers, by Murray and Knappenberger (1998), further elaborated the relationships between product regulation, the manner in which the product was developed, and the market success of the new product.  It concluded that the act of regulation increased new products' "time to market," i.e., the amount of time it took the firm from idea generation through final product launch.  Other research has looked at how collaboration among process partners, relationships between the firm and its customers, and managerial effectiveness affect the success of new product development.  Langerak (1997) reported that developers of new products found that the more turbulent the environment in which the product was being developed, the greater the importance (to market success) of both internal, within-firm, and external, between-firm, collaborations.  Since most new pharmaceutical products are the result of collaborative efforts, it seems likely that such collaborations would be even more important in the economically, socially, and politically turbulent world of drug development. Avlonitis and Gounaris (1997) found that the more "oriented" the firm is towards its market, the greater the probability of market success of its products and of the firm in general; in an analysis of the banking industry, Han and Kim (1998) discovered a direct link between the firm's orientation and its organizational effectiveness, as suggested earlier by Ruekert (1992).  Bio-Tech firms develop and market products that are strictly regulated and, as such, have to deal with a very diverse set of customers and meet their divergent objectives.
The new products must be approved (regulatory process) for sale, they have to be sold not to but through (distribution channel) the medical community, they must be “approved” for reimbursement by the patient’s insurance company or HMO (payment for the product), and finally, must meet the patient’s needs (gain value based on need).  In developing and marketing a new product, the Bio-Tech companies have to address and accommodate these four distinct and different customer objectives and be successful in meeting their primary organizational profit objective.   However, given the high degree of specialization in one or more of the tasks required to produce a new product, the high cost, and the very long time it takes to market these products, it is difficult to judge whether any of these firms has really succeeded in their efforts.


Country-of-Origin Effects on E-Commerce

Dr. Francis M. Ulgado, DuPree, Georgia Institute of Technology, Atlanta, GA



This paper examines Country-of-Origin effects in an e-commerce environment.  In addition to Country-of-Brand and Country-of-Manufacture effects, the paper investigates the presence and significance of Country-of-E-commerce-Infrastructure effects.  It develops hypotheses regarding such effects amidst varying customer and market environments, such as business vs. consumer buyers, levels of economic development, and product type, and proposes a methodological framework to test the hypotheses.  Recent years have witnessed a rapid increase in the range of multimedia technologies available internationally.  Among them, Internet technology has dramatically changed the shopping environment for individual consumers and businesses throughout the globe.  The number of consumers worldwide purchasing through business-to-business as well as business-to-consumer e-commerce media ("e-commerce" hereafter) has grown rapidly.  However, preliminary statistics indicate that the level of growth and development of internet and e-commerce infrastructure varies across countries and has generally lagged behind the United States.  Meanwhile, current research has also indicated the continued prevalence of country-of-origin effects on consumer perceptions of the products or services that they purchase.  This study investigates the presence and significance of country-of-origin effects on buyer perception in the e-commerce environment.  While country-of-brand and country-of-manufacture dimensions have been investigated in the past, this paper adds country-of-e-commerce-infrastructure effects.  These three variables are examined under different business-to-business, business-to-consumer, and level-of-development environments.  The size of the worldwide market for e-commerce was about 66 billion dollars in 1999 and is expected to grow to about 1 trillion dollars this year.  In the U.S.
alone, this is expected to reach $33 billion by the end of this year (Nielsen//NetRatings Holiday E-Commerce Index, 1999).  While this significant global growth is widely expected and documented, it has also been observed that the rest of the world lags behind the United States.  In contrast to the U.S., for example, regions such as Asia, Latin America, and Eastern Europe are behind in the development and growth of e-commerce in terms of infrastructure, buyer acceptance, and use.  Moreover, individual countries also exhibit varying degrees of growth and development relative to their neighbors in the same region. Even amongst developed countries such as Canada, Japan, and the Western European nations, the U.S. remains far ahead of the game.  It is therefore not surprising that, according to recent studies, U.S. web sites such as Yahoo! or Amazon dominate the international market.  Similar studies also indicate that, in general, business-to-business e-commerce far exceeds business-to-consumer transactions on the internet. In addition, while the internet may be seen as a marketing medium or tool that would globalize business, the literature suggests that the varying cross-cultural environments across countries in terms of legal, political and cultural variables have resulted in different purchase behaviors and attitudes towards e-commerce.  In countries such as China, government regulation and intervention in e-commerce have been more significant, resulting in a relatively more politically influenced and legally constrained commercial environment.  The uncertainty and risk resulting from such a situation have hampered the development of the infrastructure and the attitudes of consumers. Finally, even in "wired," cosmopolitan Hong Kong, cultural traditions and preferences have hampered U.S.-proven e-commerce formats such as online grocery shopping.
The lack of supporting financial infrastructure in other countries has also hampered the development of e-commerce.  For example, an Asian-based online toy seller has resorted to processing online orders through U.S. banks because no local banks are willing to do so.  In more economically developed areas of the world, most consumers in European countries are required to pay by the minute while online, significantly influencing their ability to participate in e-commerce. Even in communications technology-savvy countries such as Sweden, Denmark and Finland, most households use the internet primarily for activities such as e-mail, information, and working at home, and significantly less for e-commerce. In other countries that do exhibit e-commerce activity, research has found varying consumer behavior.  For example, web-site preferences have been shown to vary across e-consumers of the United Kingdom, France and Germany. Given such varied and complex cultural, legal and political influences on the e-commerce experience in other countries, it is not surprising that internet-based companies have had difficulty expanding their markets internationally.  For example, one London-based sports and urban fashion "e-tailer" has had the European expansion of its multicultural brands to a sophisticated clientele stymied by different software, supply chains, currencies, EU regulations, and tax and customs laws. One issue that such international e-marketers face is the possible influence of country image on their potential buyers.  A consumer's perception of a country can significantly affect their perception of a product or service associated with that country and the resulting buyer behavior.  Such influence has been termed the "Country-of-Origin" effect on consumer perception.
Various studies have offered a range of definitions of COO (e.g., Bilkey and Nes 1982; Han and Terpstra 1988; Johansson, Douglas, and Nonaka 1985; Thorelli, Lim and Ye 1989; Wang and Lamb 1983).  However, as the manufacturing or assembly location is increasingly separated from the country with which the firm or brand is associated, the term "origin" has become vague.


The FASB Should Revisit Stock Options

Dr. Ara Volkan, State University of West Georgia, Carrollton, GA



Accounting for employee stock options has been a source of controversy since Accounting Research Bulletin No. 37 was issued in November 1948. In 1995, after more than 12 years of deliberation, the FASB issued Statement of Financial Accounting Standards No. 123 (FAS 123). The pronouncement encouraged, but did not require, firms to adopt a fair value pricing model to measure and recognize the option value at the grant date and record a portion of this amount as an annual expense over the vesting period of the option. Moreover, FAS 123 did not require the quarterly calculation and disclosure of the option expense. The primary purpose of this paper is to highlight the flaws in FAS 123 and explore alternative methods of accounting and reporting for stock options that address these flaws. In addition, two studies that evaluate the impact these alternatives have on annual and quarterly financial statements are analyzed. Finally, accounting procedures are recommended that will report more reliable and useful information than current rules provide. Given that two Congressional Subcommittees intend to propose fair valuation and expensing of stock options when they finish their investigations into the Enron debacle, the content of this paper is both timely and relevant. Accounting for employee stock options has been a source of controversy since Accounting Research Bulletin No. 37 was issued in November 1948. Subsequent pronouncements, Accounting Principles Board Opinion No. 25 (APBO 25) issued in 1972 and Financial Accounting Standards Board (FASB) Interpretation No. 28 issued in 1978, continued the tradition of allowing fixed stock option plans to avoid recording compensation expense as long as the exercise price equaled or exceeded the market price at the date of grant.  In 1995, after more than 12 years of deliberation, the FASB issued Statement of Financial Accounting Standards No. 123 (FAS 123).
The pronouncement encouraged, but did not require, companies to adopt a fair value pricing model to measure and recognize the option value at the grant date and record a portion of this amount as an annual expense over the vesting period of the option. The firms that chose not to follow the recommendations of FAS 123 could continue to use the requirements of the APBO 25. These firms had to disclose the pro forma impact of FAS 123 requirements on their annual earnings and earnings per share (EPS) in the footnotes of their annual reports. However, FAS 123 did not require the quarterly calculation and disclosure of the option expense.  The primary reason the FASB did not require companies to record an option expense was pressure from the business community and Congress. Because of this pressure, the FASB reversed the accounting proposals contained in its Exposure Draft – Accounting for Stock-Based Compensation (ED) issued in 1993 and opted for a realization and footnote disclosure approach in FAS 123 as opposed to the realization and financial statement recognition approach that was advocated in the ED. Another major obstacle to recognition was the narrow scope of the definitions of assets, expenses, and equity provided in Statement of Financial Accounting Concepts No. 6 (SFAC 6). Thus, the FASB was not entirely successful in delivering on its stated intention to provide neutral accounting information to assist users in assessing investment opportunities.  To its credit, the FASB has recognized that SFAC 6 should be revised to address certain transactions that under current standards are not properly measured, recorded, and reported. Thus, in a pair of October 27, 2000 exposure drafts (file reference numbers 213B and 213C) concerning accounting for financial instruments with characteristics of liabilities, equities, or both, the FASB noted its intention to amend the definition of liabilities to include obligations that can or must be settled by issuing stock. 
The reporting requirements of APBO 25 can result in vastly different treatments for compensation packages that have similar economic consequences for both the employer and the employee. For example, a company that issues stock appreciation rights (SARs) must record compensation expense for any increase in the market value of the stock between the grant date and the exercise date, whereas no compensation expense is recorded for a fixed employee stock option with similar cash flow consequences.  The primary purpose of this paper is to highlight the flaws in FAS 123 and explore alternative methods of accounting and reporting for stock options that address these flaws. In addition, two studies that evaluate the impact these alternatives have on annual and quarterly financial statements are analyzed. Finally, accounting procedures that will report more reliable and useful information than current rules provide are recommended. The following sections briefly discuss the current requirements for accounting for stock options and evaluate the other approaches previously suggested by the FASB. Next, alternatives are offered that are superior for measuring compensation expense on both an annual and a quarterly basis and are consistent with accounting for other expenses. A recent article in the Wall Street Journal (June 4, 2002) disclosed that stock options now equal more than half of top CEOs' compensation. For the top 200 industrial and service companies ranked by revenues, 58% of executive compensation for 1999-2001 was in the form of stock options. It is clear that stock option plans are valuable tools for most companies. If the option has a value as of the grant date, that value should be an expense to the company, reducing both net income and EPS. Yet under APBO 25 and the popular alternative allowed under the FAS 123 requirements currently in force, assuming an exercise price at or above the market value of the stock at the grant date, no expense would be recorded.
On rare occasions, when the market price exceeds the exercise price at the grant date and a compensation expense arises from the issuance of a stock option, the employer must record the difference as a debit to deferred compensation expense and allocate it as compensation expense to the periods in which the services are performed. However, changes in stock prices during the service period are not taken into consideration.  From an employee's perspective, the option takes on value when the market price of the stock exceeds the exercise price. From the firm's perspective, costs are incurred when stock is issued to employees at the reduced price, since the firm gives up cash it could have received by selling the shares in the market instead of to employees. Thus, future market conditions are relevant to both the employee and the employer. Attempts to measure future costs without incorporating the most current market conditions can result in poor estimates. In comparison, for variable plans such as SARs, which entitle an employee to receive cash, stock, or a combination based upon the appreciation of the market price above a selected per-share price over a specified period, the total compensation expense is determined at the measurement date, which, for SARs, is generally the exercise date. Therefore, between the grant date and the exercise date, the compensation expense must be estimated. The estimated compensation expense and associated liability are determined on a quarterly basis by multiplying the number of SARs by the difference between the market price of the stock and the SAR base price. Amortization is required over the lesser of the service or vesting period. However, after the service or vesting period ends, compensation expense continues to be adjusted based on fluctuations in the market price until the SARs expire or are exercised.
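As a rough illustration of the variable-plan mechanics just described, one quarter's SAR accrual might be sketched as follows (the function name, signature, and the straight-line pro-rata treatment during vesting are illustrative assumptions, not a prescribed procedure):

```python
def sar_quarterly_expense(n_sars, market_price, base_price,
                          quarters_elapsed, vesting_quarters,
                          liability_to_date):
    """One quarter's SAR compensation expense under variable-plan accounting.

    Total estimated compensation = n_sars * max(market - base, 0).
    During the vesting period the liability is accrued pro rata; after
    vesting, the full estimate is carried and subsequent market-price
    changes flow through expense (which may therefore be negative).
    """
    total_estimate = n_sars * max(market_price - base_price, 0.0)
    fraction_vested = min(quarters_elapsed / vesting_quarters, 1.0)
    required_liability = total_estimate * fraction_vested
    expense = required_liability - liability_to_date
    return expense, required_liability
```

For example, 10,000 SARs with a base price of $20 and a market price of $23 after 2 of 8 vesting quarters imply a total estimate of $30,000 and an accrued liability of $7,500 at that date.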
A comparison of these two stock compensation plans (fixed stock options and SARs) indicates that while the economic and cash flow consequences appear to be identical (i.e., unfavorable impact on cash flows either in the form of payments to the holder or issuance of stock at a price lower than the market price), their accounting requirements and resultant quarterly income statement and balance sheet effects differ substantially. In January 1986, the FASB agreed that the compensation cost of stock options and stock award plans should be measured at the date of grant by using a Minimum Value (MV) model. However, the FASB reversed itself six months later and agreed that costs should be measured using a fair value model and at the later of the vesting date or the date on which certain measurement factors, including the number of shares and purchase price, would be known.  The FASB initially embraced the MV method because it was believed to be conceptually sound, objectively determinable, and easily computed. The MV of an option is defined as the market price of the stock minus the present values of the exercise price and expected dividends, with a lower bound of zero.
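The Minimum Value definition above lends itself to a direct sketch (continuous discounting and the representation of dividends as (amount, time) pairs are illustrative assumptions; the function name is hypothetical):

```python
import math

def minimum_value(stock_price, exercise_price, expected_dividends,
                  risk_free_rate, years_to_exercise):
    """Minimum Value (MV) of an employee stock option at the grant date:
    the market price of the stock minus the present values of the
    exercise price and of expected dividends, with a lower bound of zero.
    expected_dividends is a list of (amount, time_in_years) pairs."""
    pv_exercise = exercise_price * math.exp(-risk_free_rate * years_to_exercise)
    pv_dividends = sum(d * math.exp(-risk_free_rate * t)
                       for d, t in expected_dividends)
    return max(stock_price - pv_exercise - pv_dividends, 0.0)
```

Note that an at-the-money option on a non-dividend stock still has a positive MV, because the exercise price is discounted: for a $50 stock, a $50 strike, a 5% rate and 4 years to exercise, MV is roughly $9.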


How the World of Marketing Channels is Changing

Dr. Ajay K. Kohli, Emory University, Atlanta, GA

Dr. Goutam Challagalla, Georgia Tech, Atlanta, GA

Dr. Bernard Jaworski, Monitor Consulting Company, Boston, Massachusetts

Robert Lurie, Monitor Consulting Company, Boston, Massachusetts



The world of marketing channels is changing.  A deepened focus on the customer experience, micro-segmentation, and the use of technology is leading to two key developments.  First, an increasing number of companies are moving toward flexible channel systems; Microsoft's bCentral is an example of such a system.  The second development is that companies are reaching customers through multiple media.  These media afford marketers the opportunity to reach customers in the way they would like to be reached, and to deliver an ever more customized buying experience.  Avon is an example of a company that has moved from a single way of reaching customers to multiple ways in the span of a few years. These two developments create a host of new challenges for marketers. They must decide whether to use flexible channel systems or vertically integrated distributors/retailers. What criteria should be used to make these choices? And if a flexible channel system is used, what organizational changes should be made in order to work effectively with channel partners? What new skills and resources does a marketer need to work effectively with members of a flexible channel system versus vertically integrated distributors? The spread of new media raises the difficult issue of how a marketer can integrate all of the various media to deliver an experience customers can actually enjoy. What does channel integration mean anyway, and how should it be realized?


Institutional and Resource Dependency Effects on Human Resource Training and Development Activity Levels of Corporations in Malaysia

Dr. Zubaidah Zainal Abidin, Universiti Teknologi Mara, Shah Alam, Malaysia

Dr. Dennis W. Taylor, University of South Australia, Adelaide, Australia



This study considers managerial motives and orientations affecting decisions about levels of employee training and development (T&D) activities. Specifically, arguments drawn from institutional theory and resource-dependency theory are used to articulate variables that seek to uncover these managerial motives and orientations. Using listed companies in Malaysia, a field survey was conducted amongst two groups of managers deemed to have influence on the determination of annual T&D budgets and output targets, namely, human resource (HR) managers and finance managers. The results reveal that T&D activity levels are affected by institutional-theory-driven components of management dominant logic and by perceived organizational resource dependencies on employees versus shareholders. But there are contrasts in the significance of these variables as perceived by HR managers compared to finance managers. In Malaysia, there is a relatively high level of investment in human resources (mainly training and development expenditure) by companies. The federal government’s Human Resource Development Fund (HRDF) was established in 1993. Its purpose has been to encourage and help fund human resource investment activities by companies. Through reimbursements of eligible T&D expenditures, the HRDF scheme in Malaysia provides corporate managements with a strong incentive to allocate budget expenditure to T&D programs and to report on numbers of employees trained and developed. But Malaysian companies have not been consistent in taking advantage of this government scheme. This is evidenced by variability in the ratio of levies collected to claims paid by the HRDF on a company-by-company basis, suggesting that corporate managements treat T&D activity levels as quite discretionary in their planning and annual budgeting.  What factors influence management’s choice of the annual T&D activity level? 
This study will focus on whether the level of T&D activity is determined by variables embedded in institutional and resource-dependency theories. The motivation for addressing this research question is that insights can be provided about management behaviour in an operating functional area of the company (i.e., investment in human resources) that has economic or human consequences of relevance to employees, shareholders and government oversight bodies. To employees, T&D programs provide the means of maintaining their own competitiveness within their employer organization by improving knowledge, skills and abilities, especially if their current workplace environment is dynamic and complex (Lane and Robinson, 1995). To shareholders, T&D expenditure is seen as reducible in times of economic stringency in order to meet short-term profit targets, but the importance of knowledge and intellectual capital is also recognized as critical in business success (Pfeffer and Veiga, 1999). To government oversight bodies (such as the HRDF body in Malaysia), levels of corporate T&D are broadly viewed as improving the value of the country’s human capital (Huselid, 1995). This study empirically investigates the relationship between corporate T&D activity levels and the factors which influence the thinking of top HR managers and finance managers involved in the setting of T&D budgets and T&D output targets in their company. The study is confined to an investigation of large listed corporations in Malaysia, and to two players in the top management team (i.e., the HR manager and the finance manager/controller).  Institutional and resource dependency theories are invoked in this study. These theories are widely used as underlying perspectives to inform empirical research into managerial behaviour. 
Given that both the top finance manager and the HR manager have substantial influence on the determination of their corporation's annual T&D budget and targets, their more broadly developed motives and orientations would be expected to have a causal relationship to their company's actual T&D budgets and outputs.  Ulrich and Barney (1984) argued that many important similarities and differences in organizational behaviour research were often under-examined due to a lack of comparison and integration among perspectives on organizations. They contended that a multi-perspective approach to organizational research would help to more fully explain certain behaviours and their implications. As Hirsch et al. (1987) claimed, the strength of organizational research is its "polyglot of theories that yields a more realistic view of organizations".  Institutional theory and resource dependency theory offer two alternative ways of thinking about influences on T&D activity level decisions. Each perspective has a different frame of reference. In the institutional perspective, the phenomenon of isomorphic behaviour that tends towards the legitimization of management's actions is the key frame of reference. As managers engage in isomorphic behaviour, the accumulation of legitimacy concerns begins to take on its own structure.  In the resource dependency perspective, the structure of concern is the organization and its bundle of agency relationships. In this sense, there is an aggregating relationship between organizations and their resource dependencies. Resource dependencies are bundles of resource-providers (particularly human and financial resource-providers) that have certain characteristics in common. Explanatory variables arising from the isomorphic dimensions of institutional theory are identified by Kossek et al. (1994) in the notion of managerial dominant logic (MDL).
The concept of management dominant logic, first developed by Prahalad and Bettis (1986), includes managerial practices, specific skills used by key actors, experiences stored within the organisation, and the cognitive styles used to frame problems in specific ways (Bouwen and Fry, 1991). According to Prahalad and Bettis (1986), a dominant logic can be seen as resulting from the reinforcement of results from doing the 'right things' with respect to a set of business activities. In other words, when top management effectively performs the tasks that are critical for success in the core business, they are positively reinforced by economic and social successes. This reinforcement leads top management to focus effort on the behaviours that led to their success. Hence they develop a particular mindset and a repertoire of tools and preferred processes. This, in turn, determines the approaches they are likely to use in resource allocation, control over operations, and intervention in a crisis.  Kossek et al. (1994) used this notion to examine the institutional pressures on HR managers to support the adoption of employer-sponsored childcare as a form of organisational adaptation to change.  They found three dimensions of MDL, labelled 'management control', 'environmental' and 'coercive'. These dimensions form an overall management orientation toward employer-sponsored childcare.  Kossek et al.'s study supports previous research on the link between work practices and institutional influences (for example, Tolbert and Zucker, 1983; Eisenhardt, 1988; Scott and Meyer, 1991). But no previous study has directly tested the belief that MDL variables affect managers' decisions about T&D activity levels. Nevertheless, it is reasonable to speculate that such relationships may exist.


Back-Testing of the Model of Risk Management on Interest Rates Required by the Brazilian Central Bank

Dr. Herbert Kimura, Universidade Presbiteriana Mackenzie and Fundação Getulio Vargas, São Paulo, Brazil

Dr. Luiz Carlos Jacob Perera, Universidade Presbiteriana Mackenzie and Faculdade de Ciências Econômicas, Administrativas e Contábeis de Franca FACEF, São Paulo, Brazil

Dr. Alberto Sanyuan Suen, Fundação Getulio Vargas, São Paulo, Brazil



The model proposed by the Brazilian Central Bank for interest rate positions represents the regulator's first attempt to define a quantitative methodology for assessing the market risk of portfolios. Since the model allows discretion in establishing different criteria for the interpolation and extrapolation of interest rates, banks may be able to reduce their capital requirements simply by using different methods of defining the term structure. This study verifies the impact of such methods on the assessment of interest rate risk, especially in the highly volatile Brazilian market. In addition, we discuss, through simulations, whether the model defined by the regulator can influence the willingness of financial institutions to assume more credit risk by lending to counterparties with poor credit ratings and by making more long-term loans.  Following guidelines suggested by the Basle Committee, the Brazilian Central Bank has issued rules on capital requirements as a function of assumed market risk. Brazil initiated efforts to set specific market risk regulation with the issuance of legislation on the risk evaluation of positions exposed to fixed interest rate fluctuations, according to a parametric variance-covariance model.  Given the complexity of the risk factors in the Brazilian economy, which is clearly subject to major fluctuations in market parameters, it is important for Brazilian financial institutions to implement risk evaluation tools that allow better estimation of potential losses.  To illustrate the Brazilian economic scene: despite the relative success of the stabilization plan implemented in 1994, which sought to reduce inflation that had reached more than 80% in March of 1990, the interest rate is still one of the highest in the world (around 20% per year), having reached 47% per year during the 1997 Asian crisis.
Moreover, in 1999 the Brazilian currency was devalued by almost 50% in a single month, owing to a crisis of investor confidence in the conduct of economic policy.  In this context of high volatility, most of the Brazilian banking sector has implemented methodologies to measure risk, both through value-at-risk measures and through stress-test projections. Because of the specificities of the Brazilian economy, market practice has been more demanding in some respects than international regulation itself. For instance, while the Basle Committee requires quarterly updates of the variance-covariance matrix, the Brazilian Central Bank determines risk parameters for fixed rates daily. Most financial institutions themselves re-estimate the statistical risk parameters daily, incorporating daily changes in the correlations among the variables associated with market risk factors.  In this research, portfolios of financial assets are configured with data from the Brazilian markets, and maximum potential losses are estimated using the variance-covariance value-at-risk models and parameters set by the Brazilian Central Bank.  To verify whether the Brazilian market-risk regulation adequately reflects the potential fluctuations of the term structure of interest rates, this study performs back-testing procedures on the methodology required by the Brazilian Central Bank.  Thus, the article seeks to determine whether the mathematical model imposed by the Brazilian Central Bank to calculate value-at-risk is conservative or aggressive relative to the effective losses caused by Brazilian interest rate fluctuations. 
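The parametric (variance-covariance) value-at-risk referred to above can be illustrated with a minimal sketch. This is a generic calculation, not the Central Bank's actual model: the regulatory vertex mapping is omitted, and the exposure vector, volatilities, correlation, and confidence level below are illustrative assumptions only.

```python
import numpy as np

def parametric_var(exposures, cov, z=1.645):
    """One-day parametric (variance-covariance) VaR.

    exposures: present-value exposures to each risk vertex
    cov:       daily covariance matrix of vertex returns
    z:         one-sided normal quantile (1.645 ~ 95%, 2.33 ~ 99%)
    """
    exposures = np.asarray(exposures, dtype=float)
    cov = np.asarray(cov, dtype=float)
    variance = exposures @ cov @ exposures  # w' Sigma w
    return z * np.sqrt(variance)

# Illustrative example: two vertices with daily volatilities of 1% and 2%
# and correlation 0.8; a long and a short exposure.
vols = np.array([0.01, 0.02])
corr = np.array([[1.0, 0.8], [0.8, 1.0]])
cov = np.outer(vols, vols) * corr
var_95 = parametric_var([1_000_000, -500_000], cov)
```

The regulatory model additionally prescribes how the vertex volatilities and a scaling multiplier are set; here the covariance matrix is simply taken as given.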
In March 2000, the Brazilian Central Bank issued Circular 2972, which sets the calculation rules for one of the components of the capital required from financial institutions operating in Brazil, as a function of the interest rate risk of positions in national currency. This component depends on the value-at-risk, calculated according to a specific parametric variance-covariance model whose procedures and parameters are publicly available, assuring the financial market of transparency regarding the rules adopted.  The mathematical model for calculating the risk of fixed-rate positions involves, first, establishing the cash flows expected by the financial institution. This procedure requires initial processing of the information, since some financial instruments are stored in the database at face value while others must be computed from their present value accrued at the interest rate specified in the funding or investment operation. It is also important to identify which positions are exposed to interest rate variation, such as the fixed legs of futures contracts.  Marking to market is accomplished with spot interest rates implicit in interest rate futures, or in swaps of floating (post-fixed) for fixed (pre-fixed) rates, traded on the São Paulo Commodities and Futures Exchange. For simplicity, and given the features of the Brazilian market, the model assumes that the forward rate between the maturities of futures and swap contracts is constant, and that for terms longer than two years the spot rate equals the two-year rate. In addition, following market practice, interest rates are quoted as effective rates over a term of 252 working days, equal to one year.  
Thus, to discount a cash flow occurring at time T to present value, the interest rate R(T) is used, obtained by compounding the successive forward rates up to T: (1 + R(T))^(T/252) = (1 + R0)^(T1/252) · Prod_j (1 + Rj,j+1)^((Tj+1 - Tj)/252), with the product taken over the vertices with Tj < T and the last factor truncated at T, where R0 is the rate on one-day interbank deposit certificates (CDI), Rj,j+1 is the forward rate implicit between the j-th and (j+1)-th maturities of CDI, futures, or swap contracts, and Tj is the maturity, in working days, of the specified operations.  
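The curve-building rules just described (constant forward rates between traded maturities, a flat spot rate beyond the last vertex, 252-business-day annualization) can be sketched as follows. This is a simplified illustration under those stated assumptions, not the regulation's exact procedure; the vertex maturities and rates passed in are hypothetical inputs. Note that a constant forward rate between vertices is equivalent to log-linear interpolation of discount factors, which is what the sketch implements.

```python
import numpy as np

def discount_factor(T, vertex_days, vertex_rates):
    """Discount factor for a cash flow at T business days.

    vertex_days:  maturities of the traded instruments, in business days,
                  starting at the 1-day CDI vertex
    vertex_rates: effective annual spot rates (252-business-day basis)
                  at those vertices
    Assumes flat forward rates between vertices and a flat spot rate
    beyond the last vertex.
    """
    vertex_days = np.asarray(vertex_days, dtype=float)
    vertex_rates = np.asarray(vertex_rates, dtype=float)
    # Discount factors at the vertices: (1 + R)^(-T/252)
    df = (1.0 + vertex_rates) ** (-vertex_days / 252.0)
    if T >= vertex_days[-1]:
        # Beyond the last vertex, extrapolate with the last spot rate.
        return (1.0 + vertex_rates[-1]) ** (-T / 252.0)
    # Flat forward between vertices = log-linear interpolation of DFs.
    log_df = np.interp(T, vertex_days, np.log(df))
    return float(np.exp(log_df))

# Hypothetical two-vertex curve: 20% at both 1 day and 1 year (252 days).
pv_factor = discount_factor(126, [1, 252], [0.20, 0.20])
```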


Analysis of Dynamic Interactive Diffusion Processes of the Internet and New Generation Cellular Phones

Dr. Kaz Takada, Baruch College/ CUNY, New York, NY

Dr. Fiona Chan-Sussan, Baruch College/ CUNY, New York, NY

Dr. Takaho Ueda, Gakushuin University, Tokyo, Japan

Dr. Kaichi Saito, Nihon University, Tokyo, Japan

Dr. Yu-Min Chen, J. C. Penney, Dallas, TX



NTT DoCoMo has enjoyed unprecedented success with its i-mode cellular phone service in Japan.  In this study, we model the diffusion of the i-mode and other second-generation (2-G) cellular phones and empirically test its dynamic interactive effect on Internet diffusion.  The empirical results clearly support the hypothesized relationship between the two, indicating that in the short term the rapid diffusion of the 2-G phones has a negative effect on the diffusion of the Internet.  However, we contend that in the long run the diffusion of these technologies should exert a positive and complementary effect on each other.  The introduction of NTT DoCoMo's i-mode cellular phone service in 1999 propelled Japan to become the number one mobile commerce (m-commerce) nation by 2001.  The success of the i-mode service is such a phenomenon that every major newspaper and magazine has run at least one article about it in the last twenty-four months (Barron's 2000; Business Week 2000; Fortune 2000, among others).  How does the i-mode phenomenon affect traditional Internet diffusion through personal computers (PCs), and how does it affect the future of Internet diffusion?  The i-mode represents a new generation of cellular phone, capable of performing various functions beyond those of traditional voice-based cellular telephones.  According to NTT DoCoMo, the major characteristics of the i-mode phone are that users can access online services including balance checking and fund transfers from bank accounts and retrieval of restaurant and town information.  In addition to conventional voice communications, users can access a wide range of sites by simply pressing the i-mode key.  The service lineup includes entertainment, mobile banking, and ticket reservations.  
The i-mode employs packet data transmission (9600 bps), so communications fees are charged by the amount of data transmitted and received rather than by the amount of time online.  The i-mode is compatible with Internet e-mail and can transfer mail between i-mode terminals; packet transmission allows e-mail to be sent and received at low cost.  The i-mode, although dominant in the market, is not the only such service: other providers offer cellular phone services with comparable features and capabilities.  In this study, we analyze the effect of the introduction of these new second-generation (2-G) cellular phones in Japan.  Specifically, our research question is posited as follows: does the explosive growth of second-generation cellular phones stimulate the adoption of Internet access among Japanese households, or suppress it?  Diffusion research in marketing has a rich literature.  Since Bass (1969) published his seminal work on the new product growth model, hundreds of papers have been published in the leading marketing and management journals (see Mahajan, Muller, and Bass 1991 for a comprehensive review and references therein), and various modifications and refinements have been made to the original Bass model.  These studies demonstrate that the Bass model has superb forecasting capability for durable goods even with limited data available.  More importantly, the model provides valuable information on diffusion processes, such as the coefficients of external influence (p) and internal influence (q) and the potential market size (m).  The interactive effect between the diffusion processes of second-generation cellular phones and the Internet, which we analyze in this study, is rather unique in the diffusion literature, and few studies have tackled this problem.  
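The Bass model quantities mentioned above (p, q, and m) enter a well-known closed form for cumulative adoption, sketched below. The parameter values in the example are illustrative only, not estimates from the i-mode or Internet data analyzed in the paper.

```python
import numpy as np

def bass_cumulative(t, p, q, m):
    """Cumulative adopters N(t) = m * F(t) under the Bass (1969) model.

    p: coefficient of external influence (innovation)
    q: coefficient of internal influence (imitation)
    m: potential market size
    """
    t = np.asarray(t, dtype=float)
    e = np.exp(-(p + q) * t)
    return m * (1.0 - e) / (1.0 + (q / p) * e)

# Illustrative durable-good parameters; adoption starts at 0 and
# approaches the market potential m over time.
years = np.arange(0, 15)
adopters = bass_cumulative(years, p=0.03, q=0.38, m=100.0)
```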
Norton and Bass (1987) analyzed the substitution effect between the diffusion processes of successive generations of high-technology products.  Their substitution model is capable of capturing, for example, Intel's successive introductions of CPUs with remarkable precision.  Their study has implications for ours insofar as the diffusion processes of different products or technologies exhibit a substitution effect, as delineated in the research question above.  However, their study assumes that substitution occurs among successive improvements of the same technology; our study, on the other hand, deals with two different innovations, so the nature of the substitution effect is vastly different.  The new product growth model has proven very effective for analyzing diffusion processes across countries and cultures.  Since Gatignon, Eliashberg, and Robertson (1989) and Takada and Jain (1991) applied the Bass model to international marketing data, numerous studies have analyzed the diffusion processes of a variety of products and services in international marketing (Ganesh, Kumar, and Subramaniam 1997; Helsen, Jedidi, and DeSarbo 1993; Kalish, Mahajan, and Muller 1995; Putsis, Balasubramanian, Kaplan, and Sen 1997; Tellefsen and Takada 1998; Dekimpe, Parker, and Sarvary 2000, among others).  Takada and Jain (1991) analyzed the diffusion of durable goods in Pacific Rim countries, including Japan, where the i-mode was first introduced.  They found two major effects in cross-country diffusion, namely a country effect and a time effect.  The country effect indicates that the diffusion rate in high-context cultures (Hall 1981) is faster than in low-context cultures.  This implies that 2-G cellular phones can be expected to diffuse faster in Japan than in low-context cultures such as the European countries and the United States.  
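The successive-generation substitution idea attributed above to Norton and Bass (1987) can be sketched in a simplified form: generation-1 sales are limited to its adopters who have not yet switched, while generation-2 sales draw on both its own market potential and switchers from generation 1. This is a common simplified statement of their model, not the exact specification estimated in their paper, and the parameter values used are hypothetical.

```python
import numpy as np

def bass_F(t, p, q):
    """Bass cumulative adoption fraction F(t), clamped to 0 for t < 0."""
    t = np.maximum(np.asarray(t, dtype=float), 0.0)
    e = np.exp(-(p + q) * t)
    return (1.0 - e) / (1.0 + (q / p) * e)

def norton_bass_sales(t, gen1, gen2, tau2):
    """In-use base of two successive technology generations under a
    simplified Norton-Bass substitution model.

    gen1, gen2: (p, q, m) tuples for each generation
    tau2:       introduction time of generation 2
    """
    p1, q1, m1 = gen1
    p2, q2, m2 = gen2
    F1 = bass_F(t, p1, q1)
    F2 = np.where(t >= tau2, bass_F(t - tau2, p2, q2), 0.0)
    s1 = m1 * F1 * (1.0 - F2)   # gen-1 adopters who have not switched
    s2 = F2 * (m2 + m1 * F1)    # gen-2 market plus switchers from gen 1
    return s1, s2
```

Eventually generation 2 absorbs both markets: as t grows, s1 tends to zero and s2 tends to m1 + m2.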
The time effect refers to the finding that diffusion in the lead country (the country where an innovation is first introduced) is slower than in lagging countries. This implies that the diffusion of 2-G cellular phones in countries other than Japan will tend to be faster than in Japan.


Franchise Development Program: Progress, Opportunities and Challenges in the Development of Bumiputera Entrepreneurs in Malaysia

Dr. Sallehuddin Mohd Noor, Malaysia National University, Malaysia

Dr. Norsida Mokhtar, Malaysia National University, Malaysia

Dr. Ishak  Abd Rahman,  Malaysia National University, Malaysia

Dr. Jumaat Abd Moen, Malaysia National University, Malaysia



Issues related to the involvement of Bumiputeras in the development of the country, particularly in the business sector, have received attention from the ruling government since the country's independence. Before independence, the policies of the British left the Bumiputeras far behind other races in many respects. To address this problem, the government launched the New Economic Policy (NEP), which focused on eliminating poverty and restructuring Malaysia's multiracial society. The NEP era was succeeded by the National Development Policy (NDP), which aims to continue where the NEP left off. Under the NDP, the government designed programs to increase the number of Bumiputeras in the trading sector through the Bumiputera Community Trade and Industry Plan (BCTIP). In parallel with the strategy outlined in the resolution of the Third Bumiputera Economic Congress held in 1992, this paper attempts to evaluate and analyze the achievements and opportunities of the franchise development program, a vital mechanism for encouraging Bumiputera involvement in and contribution to the nation's economy. The paper also examines the main challenges faced by Bumiputera entrepreneurs in the franchise development program. Issues relating to the involvement of Bumiputeras in national development began to receive the ruling government's attention once independence was achieved.  British policies before independence clearly left the Bumiputeras behind other races in many areas. Realizing that national unity could be achieved only if the riches of the nation were shared equally among all races, the government gave the Bumiputera economic development agenda attention in the nation's economic development plans. Government involvement in this area began under the first Prime Minister, Tengku Abdul Rahman, and has continued until today. 
Realizing that an unequal pattern of wealth distribution would affect national unity, as in the May 13 Tragedy of 1969, the government designed the New Economic Policy (NEP) (1970-1990). This plan aimed to eliminate poverty and restructure Malaysian society. Although the NEP did not state an exact number of entrepreneurs to be produced, the publicly stated goal of 30% national equity ownership was a step taken by the government to encourage active Bumiputera involvement in the trade and industry sector. Unfortunately, at the end of the NEP in 1990, the Bumiputeras had managed to accumulate only 20.1% of the nation's wealth. The main reason the 30% goal was not achieved was the economic crisis that hit Malaysia in the mid-1980s.  The beginning of the National Development Policy (NDP) marked the end of the NEP era. Again, the government designed special programs to increase the number of Bumiputeras in the trade sector; the Bumiputera Community Trade and Industry Plan (BCTIP) was formed to achieve this goal. In the Seventh Malaysia Plan, the government devised new strategies to develop the BCTIP. Several programs were executed in order to achieve the NDP's objectives, such as establishing small and medium-sized industries (IKS) with competitive and enduring qualities in strategic economic sectors. To this end, several strict conditions are enforced on those interested in and eligible to join the BCTIP.  In parallel with the strategy outlined in the resolution of the 2nd Bumiputera Economic Congress, 1992, this paper attempts to evaluate and rethink the achievements, opportunities, and challenges of the Franchise Development Program (FDP) as an important mechanism for encouraging Bumiputeras to take an active role in the development of the nation's economy.  
The franchise trading system has been identified as one of several shortcuts for maximizing the number of Bumiputera entrepreneurs and businessmen, which in turn would increase the size of the Bumiputera middle class. The Prime Minister, while launching the FDP on 27 January 1994, noted: “Today's business and trade world is becoming more challenging, more competitive and more advanced. Small, self-owned businesses run haphazardly will not be very fruitful. Today we live in a world of ‘giants’. Hence, the Bumiputeras have to enter a very large business and entrepreneurship field and have to be ready to bear reasonable risks. To excel, any business has to be handled wisely, systematically, efficiently and expansively, for instance by creating branches or networks. One approach to good business is the franchise system. This system may allow Bumiputera involvement without extremely high risk.”   In this context, the government developed the Franchise Development Program as a strategy for developing a community of Bumiputera entrepreneurs able to withstand and excel in the business world. With the involvement of the private sector, under the Malaysian privatization concept, opportunities arise for Bumiputeras to set up franchise businesses that stress uniformity and quality in the goods and services provided. Whatever the situation, only those who are determined, independent, strong, disciplined and wise in management skills can ensure the success of a franchise business.


Developing a Computer Networking Degree: Bridging the Gap Between Technology and Business Schools

Dr. Karen Coale Tracey, Central Connecticut State University, New Britain, Connecticut



The idea of integrating curricula and fostering collaboration between academic disciplines is not a new concept in higher education. Interdisciplinary learning, teaching, and curriculum came to the forefront as part of the progressive educational movement of the early twentieth century. Multidisciplinary and interdisciplinary programs can foster, accelerate, and sustain constructive change in academia and student learning (Ellis & Fouts, 2001).  The purpose of this paper is to describe the proposal for the Bachelor of Science in Computer Networking Technology degree at Central Connecticut State University (CCSU). CCSU is a regional public university that serves primarily residents of central Connecticut.  It is one of four regional public universities offering higher education in the state.  CCSU's location in the center of the state means that the entire population of the state is within 75 miles of its campus in New Britain.  Connecticut is one of the smallest states in land area: its 4,845 square miles make it the third smallest state by area (World Almanac, 2002). The greatest east-west distance in the state is approximately one hundred miles, and the greatest north-south distance is approximately seventy-five miles.  Connecticut's population of approximately three million makes it the twenty-first smallest state by population (U.S. Bureau of the Census, 2000).   Its population growth during the last decade (1991-2000) was 3.6 percent, noticeably less than the 13.1 percent growth in the U.S. as a whole.  CCSU is located approximately two to three hours from Boston and New York City.   CCSU is divided into five academic schools:  Arts/Sciences, Business, Professional Studies, Technology, and Graduate. CCSU enrolls approximately twelve thousand students.  About two thousand of these students are enrolled in the Business School and 900 in the School of Technology. Most CCSU students (about three quarters) are undergraduates (CCSU, 2002).    
Ninety-five percent are Connecticut residents, twenty-two percent live on campus, and sixty-eight percent of the full-time students receive need-based financial aid (Morano, 2002).  There is no agreed definition of multidisciplinary and interdisciplinary programs, but Beggs (1999) provides a guide. He describes a discipline as a body of knowledge or branch of learning characterized by accepted content and learning. Research, problem solving, or training that mingles disciplines but maintains their distinctiveness is multidisciplinary. Practically speaking, faculty from at least two disciplines who work together to create a learning environment and incorporate theory and concepts from their respective academic disciplines can be categorized as interdisciplinary.   The creation of an international field course, for example, is a platform for students from different disciplines to interact; the result is a broad picture of each discipline in an international context.   Researchers have found many strengths in interdisciplinary curricula (Anderson, 1988).  An interdisciplinary curriculum improves higher-level thinking skills, and learning is less fragmented, so students are provided with a more unified sense of process and content. It provides real-world applications and team building, heightening the opportunity for transfer of learning and improving mastery of content.  Interdisciplinary learning experiences positively shape learners' overall approach to knowledge through a heightened sense of initiative and autonomy, and improve their perspective by teaching them to adopt multiple points of view on issues.  Ellis and Fouts (2001) summarized the benefits of the interdisciplinary curriculum: it improves higher-level thinking skills, and learning is less fragmented, therefore providing students with a more unified sense of process and content.  
The interdisciplinary curriculum provides real-world applications, heightening the opportunity for transfer of learning, and improved mastery of content results from interdisciplinary learning.  Interdisciplinary learning experiences positively shape learners' overall approach to knowledge through a heightened sense of initiative and autonomy, and improve their perspective by teaching them to adopt multiple points of view on issues. Motivation to learn is also improved in interdisciplinary settings.  The proposal for the BS in Computer Networking Technology responds to the rapid changes in “high technology” fields and the high demand for information technology workers.  Campuses and schools are increasingly wired, as students and teachers look to computers and the Internet to supplement other methods of teaching and learning. Technology is also becoming an important part of business and education administration, as networks provide a means to manage these enterprises.  Technology is changing the face of education. Today's students want to learn skills that will make them highly marketable in the Internet economy; as a result, there is increased emphasis on skills development as well as on gaining knowledge and understanding.  Industry predicts a shortage of approximately 350,000 information technology workers.  Summaries found on the Internet also attest to the critical need for information technologists: Central Connecticut State University and the Department of Computer Electronics and Graphics Technology are meeting the needs of the State of Connecticut by filling many of the positions listed in the “Status of Connecticut Critical Technologies Report” (March 1, 1997).  In that report, information technology workers are identified as a necessity to support the aerospace and manufacturing industries. 


Intellectual Property Rights and the Attributes of the Market

Dr. Mohammed Boussouara, University of Paisley, Scotland, UK



It is now well established by academic scholars that property rights are a necessary requirement for the functioning of a market-based economy (Alchian & Demsetz, 1973; Drahos, 1996). Over the last two centuries or so this principle has been extended to Intellectual Property Rights (IPR), which include patents, copyrights, trademarks, brands, and the like (Abbott et al., 1999). Their importance is shown by the monetary and competitive gains generated by brand equity.   However, the definition and protection of intellectual property rights is also one of the most complex subjects of international negotiation, because its acceptance has not always been universal (May, 2000). Criticisms of the extension of IPR include, among other things, its impact on free trade and competition (Maskus, 2000; Maskus & Lahoual, 2000).  This paper argues that recent court cases and agreements such as TRIPS (Gervais, 1998) may lead to the erosion of the fundamentals of property rights per se and, by implication, of the attributes of the market. Specifically, it addresses the issues of competition and the rights of buyers and consumers.  The focus of the paper is trademarks and brands; in particular, it addresses the issue of gray marketing and its implications for global marketing management (Clarke & Owens, 2000) and for the innovation of science-based products such as pharmaceuticals (Rozek & Rapp, 1992; Grubb, 1999).  Finally, the paper argues that it is far better for companies to use marketing tools, rather than the courts, to protect their brand and trademark equity.

