The Journal of American Business Review, Cambridge
Vol. 1* Number 1 * December 2012
The Library of Congress, Washington, DC * ISSN 2167-0803
All submissions are subject to a double blind peer review process.
The primary goal of the journal is to provide opportunities for business-related academicians and professionals from various business-related fields around the globe to publish their papers in one source. The Journal of American Business Review, Cambridge will bring together academicians and professionals from all business-related fields to interact with members inside and outside their own disciplines. The journal provides opportunities for researchers to publish their papers as well as to view others' work. The Journal of American Business Review, Cambridge is a refereed academic journal which publishes scientific research findings in its field under ISSN 2167-0803, issued by the Library of Congress, Washington, DC. The journal meets the quality and integrity requirements of applicable accreditation agencies (AACSB, regional) and journal evaluation organizations to ensure that our publications provide authors with publication venues that are recognized by their institutions for academic advancement and academically qualified status. No manuscript will be accepted without the required format, and all manuscripts should be professionally proofread before submission.
The Journal of American Business Review, Cambridge is published twice a year, in Summer and December. Website: www.jaabc.com. Requests for subscriptions, back issues, and changes of address, as well as advertising inquiries, can be made via the Journal's e-mail address. Manuscripts and other materials of an editorial nature should also be directed to that address.
Predicting the Aggregate Economic Impact of Rural Community Events
Dipak Subedi, Economist, U.S. Department of Labor, Bureau of Labor Statistics, Washington D.C.
Dr. Olga I. Murova, Texas Tech University, Lubbock, TX
Rural tourism is an important source of revenue for rural communities. This study develops a model that can be used to predict the aggregate economic impact of a rural community event on the local economy. Event characteristics and community characteristics are used to explain and predict this impact. The developed model explains 63 percent of the total variation in the aggregate economic impact of a rural event. Results show that event days, number of years the event has been held, community investment, miles driven to the event, and community population have a positive and significant effect on aggregate economic impact. Distance from a major city and median family income have a negative and significant effect on a participating community. Several dummy variables for weather and event type are used in the model. Findings show that inclement weather adversely affects travel, causing lower turnout for the event, and that among all types of events nature tourism brings the most dollars to the community. The availability of such a model should be helpful in efforts to predict the outcome of a planned rural community event. Rural economies held up better during the recent recession than their urban peers (Henderson, 2009). This happened for several reasons: the housing crisis was less severe in most rural areas, rural areas had less exposure to investment bank activities, and rural economies were involved in activities that did not experience a great reduction in demand. One such activity is rural tourism. According to Brown (2002) and Lewis (1998), rural areas have become popular destinations for tourists because they are less expensive than other popular destinations and equally fun and relaxing. Rural communities are isolated communities in open country with low population density. The USDA's Office of Rural Development defines rural by various population thresholds.
The 2002 Farm Bill defined a rural area as “any area other than (1) a city or town that has a population of greater than 50,000 inhabitants, and (2) the urbanized areas contiguous to such a city or town” (Farm Bill, 2002). An estimated 56 million Americans reside in rural areas, which comprise 80 percent of the country's land and 2,305 counties (Whitener and McGranahan, 2003), making up an important part of the overall US economy. In Texas, millions of people choose to live in small towns. As a result, the state has the largest rural population in the nation. According to the 2007 Census, an estimated 13.8 percent of the state’s total population lives in rural areas. In terms of total area, about 80% of Texas is rural, and around 70% of all Texas counties are rural. Given the composition of rural areas in Texas, they hold a vital position in the state’s economy. In an effort to promote and expand rural tourism, many state and local entities initiate and develop rural community events. An issue they face in funding these community events is whether the events will generate sufficient aggregate economic impact (AEI) to justify the public investment. Moreover, ensuring that viable events, which generate more aggregate economic impact, receive more funding is of increasing importance in justifying government expenditures in tourism. The Texas Department of Agriculture (TDA) is implementing several programs aimed at helping and promoting rural tourism. One such program is the GO TEXAN! Hometown STARS (Supporting Rural Tourism) program, initiated by the TDA to enhance the growth and prosperity of rural Texas towns and counties. Rural communities, rural businesses, and other organizations working for rural Texas are eligible to participate. Eligible communities may apply for and receive up to $10,000 in funds to promote tourism events in rural areas of the state. This program has been in action since 2003.
To evaluate the program’s efficiency, a survey is conducted every year that includes a wide range of questions on previous visits, plans to revisit, gender, miles traveled, personal expenses, and the impact of media advertising. Based on responses to these questionnaires, the economic impact of GO TEXAN Hometown STARS is evaluated. Using data from multiple events, this study analyzes the impact of rural tourism events, where visitors spend their money based on a given event's attractiveness, which in turn generates income, jobs, and taxes for the region. We evaluate the total returns from 39 community events and seek to determine to what extent outcomes could have been predicted based upon event and community characteristics. This research seeks to provide a model that can be used to predict the outcomes of tourism events.
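The prediction exercise described above can be illustrated with a small ordinary-least-squares sketch. The predictor names, coefficients, and data below are invented for illustration only; they are not the study's survey data or its actual estimates.

```python
# Illustrative OLS sketch: predict aggregate economic impact (AEI) from
# event and community characteristics, in the spirit of the model above.
# All numbers are synthetic; only the variable roles follow the abstract.
import numpy as np

rng = np.random.default_rng(0)
n = 39  # the study evaluates 39 community events

# Hypothetical predictors (not the study's actual data)
X = np.column_stack([
    np.ones(n),                      # intercept
    rng.integers(1, 4, n),           # event days
    rng.integers(1, 20, n),          # years the event has been held
    rng.uniform(0, 10_000, n),       # community investment ($)
    rng.uniform(5, 200, n),          # miles driven to the event
    rng.uniform(1_000, 50_000, n),   # community population
])
beta_true = np.array([5_000, 2_000, 300, 1.5, 40, 0.2])
y = X @ beta_true + rng.normal(0, 20_000, n)  # synthetic AEI

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
y_hat = X @ beta_hat
r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
```

A fitted model of this shape can then be evaluated against planned events by plugging in their characteristics; the study reports that its estimated version explains 63 percent of the variation in AEI.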
Does DFI by MNCs Improve Income Distribution for Society in Latin American Countries?
Steven Blair, Sam Houston State University, Huntsville, TX
Dr. Balasundram Maniam, Sam Houston State University, Huntsville, TX
Direct Foreign Investment (hereafter DFI) by multinational corporations (hereafter MNCs) has developed rapidly in recent times, and unindustrialized countries, such as those in Latin America (hereafter LA), have enticed an increasing share of it. The scale, and especially the scheduling, of increases in DFI into LA countries have changed greatly. The governments of LA countries have been required to revise the policy environment faced by MNCs within their borders to increase both the magnitude and timing of these investment increases. Latin American governments that seek to invite DFI must find ways to reassure MNCs that their funds can grow. MNCs’ investments are sensitive to changes in the comparative prices of factors of production (labor and capital) and commodities; movements of assets and labor across borders and within countries; the environment of technological transformation and diffusion; and the impact of global expansion on volatility and susceptibility. Significant differences in these conditions exist across many LA countries. These differences show up most prominently in the magnitude and price of human capital, the institutional framework, the quality of authority, and the internal dynamics of institutional and sociopolitical conditions. The quantity and quality of human capital, or the workforce, probably has the broadest impact on an MNC's investment decision. This study examines the key effects of globalization and how they impact the LA economy. This review looks at the impact on income distribution and poverty levels in several LA countries brought on by globalization through DFI. This study will reveal that the globalization of several LA countries has created winners and losers within their societies, and that this has severely affected vertical and horizontal inequalities.
The study will also review how several LA countries, through governmental policies, have created an air of openness through the liberalization of trade, investment regimes, and asset movements, providing growth- and welfare-enhancing effects for their societies. This study will review how liberalization has worked to increase the quantity and quality of human capital available in many LA countries. This review will also demonstrate the social disparity created as a byproduct of liberalization by the governments of many LA countries. Singh et al. (2008) indicated that an important drawback of Neo-Classical financial theory has been its failure to satisfactorily determine what constitutes DFI. Neo-Classical economists have failed to distinguish DFI from portfolio investments, and this has prevented them from having the data needed to accurately predict the influence of each of these individual components on a country's development. The data are further diluted when the two types of information derived from them are grouped together by Neo-Classical economists, causing them to view MNCs as arbitrageurs of capital, which results in the flow of capital being mainly influenced by changes in interest rates (Aoyama, 1996). Hymer (1976) began to look away from Neo-Classical financial theory by proposing that MNCs were evolving as global business organizations in reaction to imperfect market conditions. These imperfect market conditions could occur naturally, from situations such as uncertainty about the behavior of suppliers and the quality of inputs, or through the imposition of tariffs and foreign exchange controls levied by local governments to safeguard local industry (Rugman et al., 1985). Proponents of DFI argue that MNCs use DFI to increase market power through management and technology, the transfer of capital, and directing and removing competition within the market.
From Bankruptcy to the Stock Market
Dr. Gurdeep K. Chawla, National University, San Diego, CA
The recent economic downturn in the United States, beginning in 2008, has created a difficult environment for organizations. As a result, some companies have been struggling to stay in business. More specifically, these struggling companies were not able to operate amid the credit crunch and limited consumer demand. Consequently, they filed for bankruptcy. According to BankruptcyData.com, 207 publicly traded companies filed for bankruptcy protection in the United States in 2009. In comparison with previous years, 2009 ranks as the third busiest year for corporate bankruptcy filings in the United States. BankruptcyData.com also determined that companies that filed for Chapter 11 bankruptcy protection in 2008 had a total of over $1 trillion in assets. It is very difficult for companies to reorganize, survive, turn around, and be profitable again after filing for bankruptcy. This paper focuses on some of the companies that filed for bankruptcy in 2008 or 2009 and have been able to reorganize and reemerge as strong organizations. To accomplish this goal, company financial statements were analyzed to determine whether the bankruptcies could have been predicted by market participants based upon the financial information provided by the companies. The paper also compares predecessor (pre-bankruptcy) and successor (post-bankruptcy) financial ratios to investigate the improvements made by the companies. Finally, the paper reviews how the companies were able to turn their bankruptcies around and become financially strong organizations again. It requires a great deal of planning and effort for companies to file for bankruptcy under Chapter 11. The companies, in arguing that they are worth more alive than dead, are required to prepare and submit a plan which must be approved by a bankruptcy court to obtain protection from creditors under Chapter 11.
The plan outlines the steps the company will take to make its operations profitable and provide competitive returns to its stakeholders. For example, a company might focus on improving sales, controlling costs, improving asset management, managing debt, and/or creating a better capital structure. After the plan is approved, the company has to execute it to make sure that the organization can turn around and become a profitable enterprise again. This paper uses the Springate model to analyze the companies' financial statements to determine whether their bankruptcies could have been predicted. The paper takes a closer look at the financial statements of the companies by conducting financial ratio analyses and measuring improvements in performance in different areas after the companies reorganized and emerged from bankruptcy. Finally, the paper reviews financial information to analyze the strategies adopted by the companies to improve their performance and be successful again. Five companies were selected at random from the list of companies that filed for bankruptcy in the last two to three years. All five companies have been doing well financially. The companies' audited and published financial statements, along with other financial information, were collected from their annual reports. Financial ratios, which have been used for over a hundred years to analyze the financial information of companies, were employed to investigate profitability, liquidity, debt management, and asset management before and after bankruptcy.
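The Springate (1978) score referenced above combines four financial ratios, with values below roughly 0.862 conventionally read as a distress signal. A minimal sketch follows; the input figures are illustrative and are not drawn from the paper's five-company sample.

```python
# Illustrative sketch of the Springate (1978) bankruptcy-prediction score.
# Z = 1.03*A + 3.07*B + 0.66*C + 0.4*D, where
#   A = working capital / total assets
#   B = earnings before interest and taxes / total assets
#   C = earnings before taxes / current liabilities
#   D = sales / total assets
# Z < 0.862 flags potential distress. Figures below are hypothetical.
def springate_score(working_capital, ebit, ebt, sales,
                    total_assets, current_liabilities):
    a = working_capital / total_assets
    b = ebit / total_assets
    c = ebt / current_liabilities
    d = sales / total_assets
    return 1.03 * a + 3.07 * b + 0.66 * c + 0.4 * d

# A struggling (hypothetical) firm: negative working capital and earnings
z_distressed = springate_score(working_capital=-50, ebit=-20, ebt=-35,
                               sales=400, total_assets=1000,
                               current_liabilities=300)

# A healthy (hypothetical) firm
z_healthy = springate_score(working_capital=200, ebit=150, ebt=130,
                            sales=1200, total_assets=1000,
                            current_liabilities=300)
```

Applying the score to predecessor (pre-bankruptcy) statements, as the paper does, shows whether the filing could have been anticipated from published financials alone.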
Accounting Expert Systems and the Treatment of Uncertainty
Dr. Awni Zebda, Texas A&M University-Corpus Christi, Corpus Christi, Texas
Dr. Michelle McEacharn, University of Louisiana at Monroe, Monroe, Louisiana
Recent years have witnessed a rising growth pattern in the development and use of expert systems in accounting and auditing. A very important consideration in expert system development is the treatment of uncertainty. New approaches to handling uncertainty have been explored, but in accounting and auditing expert systems, probabilistic logic has been the typical solution method. This paper will briefly describe the expert system environment within accounting and auditing, illustrate a major criticism in the design of these systems, and introduce fuzzy logic as a potential solution to the weakness. In particular, this paper serves to encourage the use of fuzzy logic in accounting and auditing, arguing for its continued application to the discipline. Expert system development has been experiencing a rising growth pattern. The introduction of special programming languages and "shells" has contributed to the growing popularity of expert systems. Advances in computer capabilities and declines in computing costs have also helped this growth. These events have made expert system technology more accessible to users. Though the technology arose from the research and development laboratories of medicine and the military, expert systems have also seen significant development in the business world. The design of XCON by the Digital Equipment Corporation provided evidence of the economic benefits in improved efficiency and quality associated with an expert system [Giarratano and Riley 1989]. Published reviews (e.g., Brown and Phillips, O’Leary and Watkins, Coakley and Brown) show that expert system research has been substantial in the accounting and auditing context. Many of the problems faced by accountants, auditors, and tax practitioners are ill-defined and unstructured problems which are well-suited for expert system application.
There has been recent expert system development in such areas as capital budgeting, internal control evaluation, going-concern decisions, bankruptcy prediction, materiality judgments, and tax planning. This paper will describe the expert system environment within accounting, auditing, and tax; illustrate a major criticism in the design of these systems; and introduce a potential solution to the weakness. The remainder of the paper is organized as follows. The next section provides an overview of the application of expert systems to accounting, including auditing and tax. Section three examines the advantages and limitations of expert systems with special emphasis on the treatment of uncertainty. Section four discusses the use of probabilistic logic as a means to deal with uncertainty in accounting. Section five discusses fuzzy logic as an alternative. Finally, section six provides a summary and conclusions. The initial growth in expert systems can be attributed to the creation of the MYCIN program, an expert system for bacterial infection diagnosis. The research culminated in the production of Empty MYCIN (EMYCIN), a shell from which the MYCIN knowledge base was removed. From that point, significant progress was made in the development of expert system shells and programming languages, permitting the user to design a program to fit the specifics of the situation without enormous program development costs [Giarratano and Riley 1989]. Accounting and auditing researchers and practitioners have shown interest in expert systems technology.
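Fuzzy logic, the alternative the paper advocates, replaces a crisp true/false threshold with a degree of membership between 0 and 1. A minimal sketch for a materiality judgment follows; the fuzzy set and its breakpoints (2% and 10% of net income) are invented for illustration and are not taken from the paper.

```python
# Illustrative fuzzy-membership function for the accounting judgment
# "the misstatement is material." Instead of a hard cutoff, the degree
# of membership ramps linearly between two hypothetical breakpoints.
def material_membership(misstatement_pct):
    """Degree (0..1) to which a misstatement, expressed as a percentage
    of net income, belongs to the fuzzy set 'material'.
    Below 2% it is clearly immaterial (0); above 10% clearly material (1);
    in between, membership rises linearly."""
    if misstatement_pct <= 2.0:
        return 0.0
    if misstatement_pct >= 10.0:
        return 1.0
    return (misstatement_pct - 2.0) / 8.0
```

A rule such as "IF the misstatement is material THEN expand testing" can then fire to a partial degree, which is the kind of graded reasoning a probabilistic point estimate does not directly capture.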
Ex Ante Recognition of Bubble Stock Gauges and Forecasts of Their Bursting in the United States and Japan
Dr. Robert H. Parks, The Lubin School of Business, Pace University, New York, NY
Section I of this article summarizes the ex ante identification of four major stock gauges as bubbles fated to burst. Section II presents the pure theory of a fully integrated stock valuation model coupling the Gordon (1962) dividend model with the Miller-Modigliani dividend irrelevancy model (1961), hereafter the G-MM model. Section III, the applied analysis, presents four matching valuation recipes I used to identify bubble stock gauges. Given its still current relevance, I devote the core of this article, in Section III (A), to the bursting of the S&P 500 in 2000. Section III (B) reviews the predicted collapse of the “megabubble” Nasdaq Composite. Section III (C) tracks, briefly, my forecast of the bursting of the S&P 500 in 1987. Section III (D) also reviews briefly the predicted crash of the megabubble Nikkei 225 in 1990. I stress throughout the role of fiscal and monetary policy. A principal theme is that Federal Reserve policy in 1997-2000 fueled an overinvestment boom that collapsed, and bubble stock gauges that crashed, predictably. A related thesis is the failure of the regulatory officials and the central bank to suppress systemic financial fraud. Section IV sets forth my conclusions and suggestions for further research. The appendix lists all ex ante bubble recognitions and forecasts. All forecasts there are press-documented, and each bubble burst as forecast. The footnotes include additional press documentation and further ex ante evidence of bubble gauges in published articles. Defining and identifying bubbles ex ante are inseparable tasks. In addition to the valuation analysis, I constantly tracked the emerging systemic depressants.
The two analyzed together put in stark relief the theoretically maximum valuations under ideal conditions versus the depressants that lay waste to valuations.(1) For immediate perspective, just glance at the treacherous roller-coaster ride of the major stock indexes for the S&P 500, Nasdaq Composite, and Japan’s Nikkei 225. The legacy of bursting stock gauges included damaging and dangerous recessions, except for the 1987 crash. (Bubbles can sometimes exist in otherwise healthy economies.) But Japan after 1990 and the United States after 2000 suffered recessions, widespread bankruptcies, evaporating profits, major unemployment, outright deflation in Japan, and the specter of deflation in the United States. Massive boom-to-bust cyclical swings and crashing stock gauges contradicted the prematurely celebrated “wealth effect” and “new era” views. Crowd mania crowded out efficient market theory.(2) A simple hypothetical example (Parks, 1998) may make it easier to follow the applied theoretical, mathematical, and empirical evidence in Section III. Assume two companies, A and B, with identical growth rates (g) = 8% for earnings, dividends, and capital gains; a 10% required return (k); and identical starting earnings (E0 = 200). To show maximum potential valuations and price-earnings ratios under extremely optimistic but highly unrealistic assumptions, one must posit (a) that earnings are always honestly reported and forecasted and (b) that the required return always matches the realized return. Assume further that the only difference in this ultra-happy hypothetical example is that A distributes 50% of its total earnings and plows the 50% retained earnings back into growth, while B plows back 100% of total earnings but pays zero dividends.
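The hypothetical figures above plug directly into the Gordon dividend model, P0 = D1 / (k - g). A minimal worked computation for firm A follows; under the Miller-Modigliani irrelevancy proposition, firm B's zero-payout policy would, in this idealized setting, leave its value unchanged.

```python
# Worked Gordon-model computation using the article's hypothetical
# figures: g = 8%, k = 10%, starting earnings E0 = 200, and firm A
# paying out 50% of earnings as dividends.
g = 0.08           # growth rate of earnings, dividends, capital gains
k = 0.10           # required return
e0 = 200.0         # starting earnings
payout_a = 0.50    # firm A's dividend payout ratio

d1 = payout_a * e0 * (1 + g)   # next year's dividend: 0.5 * 200 * 1.08 = 108
price_a = d1 / (k - g)         # Gordon value: 108 / 0.02 = 5400
pe_a = price_a / e0            # price-earnings ratio on E0: 27
```

The narrow 2% spread between k and g is what makes the valuation, and the implied P/E of 27, so sensitive: any downward revision of g or upward revision of k collapses the denominator's cushion, which is the mechanism the article's bubble recipes exploit.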
The Effects of Shelf Registration on Cost of Debt
Dr. Debra Skaradzinski, Siena College, Albany, NY
Dr. Allan Graham and Dr. Pamela Stuerke, University of Rhode Island, RI
This paper examines publicly traded bonds that were issued from 1995 through 1998 to see whether shelf-registered debt is riskier than traditionally issued debt, as measured by issuance prices. Using a sample of new industrial issues, we find that the yield of a bond issue is not significantly influenced by whether the debt was shelf- or traditionally registered. We also find that any increased risk as a result of shelf registration is impounded in bond ratings, which do significantly influence yields. We believe this implies that the choice of registration method (shelf vs. non-shelf) is most likely an indicator of firm size and capital structure rather than an indicator of firm risk, and that potential changes in risk due to the expectation of increased leverage are recognized by market participants prior to bond issuance. This paper examines publicly traded bonds that were issued from 1995 through 1998 to see whether or not the higher risk identified in shelf registrations of debt (Moehrle et al., 2004) is priced at issuance. Earlier research on shelf-registered debt (Allen et al., 1990; Fung and Rudd, 1986) found that the method of issue was not significantly associated with bond price at issue, so Moehrle et al.’s subsequent result was somewhat surprising. Furthermore, Moehrle et al.’s warning to regulators to consider reexamining the issue of shelf registration, and to investors that they should “beware of the inherent risks of shelf-registered securities,” emphasized that this result had considerable economic repercussions. Shelf registration was originally intended to allow stable, highly visible firms additional flexibility in accessing capital markets. The mechanism of shelf registration works like an option for the firms that utilize it. Firms that shelf-register a specific amount of debt or equity securities are subsequently allowed, but not required, to issue those securities within two years of registration.
The debt securities are priced according to the market rates and the firms’ debt ratings at the time of issuance, not at the time of registration, so it does not benefit a firm to register securities with the intent of purposely taking later action that would downgrade the debt. On the other hand, since this prior registration speeds up the actual issuing process, firms are more readily able to take advantage of a short-lived drop in market interest rates. This option can be considered a financial benefit to the firm. However, if debt that is issued through shelf registration is perceived by the market to be riskier than conventionally issued debt, then shelf-registered debt should have an additional cost associated with it. Firms would then have to weigh the benefit of a quick response to lower market rates against the inherently higher cost of their shelf-registered debt. In contrast, if debt issued through shelf registration is not perceived to be riskier, then managers of firms that meet the SEC’s criteria may have additional incentives to consider the use of shelf-registrations. Using a sample of new industrial issues, we find that the bond market does not price the increased risk. We believe this implies that the choice of registration method (shelf vs. non-shelf) is most likely an indicator of firm size and capital structure rather than an indicator of firm risk. The remainder of this paper is organized as follows. Section 2 includes a brief overview of literature concerning shelf registration and poses our research question. In Section 3, we describe the models employed in the analysis and describe the sample. We present the empirical results as well as an analysis and discussion of these results in Section 4. Section 5 follows with a summary and conclusion of the paper. 
From February of 1982 to November of 1983, the Securities and Exchange Commission (SEC) tested Rule 415, which permitted qualifying firms to register debt or equity securities up to two years in advance of issuing. After November of 1983 the SEC made Rule 415 permanent. The intent of Rule 415 is to afford large and established firms greater financial flexibility.
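The paper's central pricing test can be sketched as a yield regression with a shelf-registration dummy. Everything below is synthetic: the data are simulated so that yields depend only on a rating score, mirroring the paper's finding that the shelf dummy carries no significant premium once ratings are controlled for.

```python
# Illustrative yield regression: does a shelf-registration dummy add
# explanatory power once bond ratings are controlled for? Synthetic data
# are generated with NO shelf effect, so the estimated dummy coefficient
# should be near zero while the rating coefficient is recovered.
import numpy as np

rng = np.random.default_rng(1)
n = 200
rating = rng.integers(1, 10, n).astype(float)   # 1 = highest grade ... 9 = lowest
shelf = rng.integers(0, 2, n).astype(float)     # 1 if shelf-registered
yield_pct = 4.0 + 0.5 * rating + rng.normal(0, 0.3, n)  # no shelf effect

X = np.column_stack([np.ones(n), rating, shelf])
beta, *_ = np.linalg.lstsq(X, yield_pct, rcond=None)
intercept, rating_coef, shelf_coef = beta
```

In the paper's actual tests the same pattern appears in real issuance data: ratings significantly influence yields, while the registration method does not.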
Twenty Years of Advances in Accounting—1984 - 2003
Dr. Jeffrey J. McMillan and Dr. Daryl M. Guffey, Clemson University, Clemson, SC
This paper provides a description and categorization of the topical content and research methods employed in Advances in Accounting (AIA) since its inception in 1984. AIA was established to provide an important forum for discourse among and between academic and practicing accountants. Its goal of being an outlet for quality research across a wide spectrum of topics based on varying research methodologies has resulted in a journal that arguably has the most distinctive mix of articles among quality accounting journals. For twenty years Advances in Accounting (AIA) has provided researchers an outlet for disseminating new conceptual arguments, models, and proposals that enhance our existing knowledge base. AIA was established to provide an important forum for discourse among and between academic and practicing accountants on issues of significance to the future of the discipline. Emphasis was placed, and still is, on original commentary, critical analysis, and creative research that would substantively advance our understanding of financial markets, behavioral phenomena, regulatory policy, and accounting education. Over the years, the trends in contributors, topical content, and research methods utilized in various research outlets have been examined (Hutchison and White 2003; Meyer and Rigsby 2001; Urbancic and Zimmerman 1994; Carnaghan et al. 1994; Mitchusson and Steinbart 1993; Heck and Bremser 1986; Dyckman and Zeff 1984). Periodic analyses of the content and methods used in an academic journal help identify trends and the most prolific authors who have had significant influence on the evolution of accounting knowledge. They also help one to better appreciate and understand what the journal has been able to achieve over a period of time. To that end, this article presents and categorizes the topical issues addressed and research methods employed in AIA articles. In addition, an analysis of AIA’s contributors and an evaluation of AIA’s impact are provided.
Since its inaugural volume in 1984, AIA has been issued once a year, with one supplemental volume published in 1989. Thus, over the past 20 years there have been 295 articles published in AIA’s 20 annual volumes and one 1989 supplemental volume, slightly over 5,600 pages in total. The number of articles per volume ranged from 12 to 18. The longest article published in AIA was 47 pages and the shortest 5 pages; the median article length was 18 pages and the mode was 14 pages. Unlike many other academic journals (e.g., Journal of the American Taxation Association (JATA), Behavioral Research in Accounting (BRIA), Auditing: A Journal of Practice and Theory (AJPT)), AIA has not emphasized one area of interest. From Volume 1 (1984) through Volume 20 (2003), AIA has been true to its stated mission of publishing articles focusing on a wide array of topics and differing research methodologies. Thus, to gain an understanding of the vast and varied makeup of AIA’s content over the past 20 years, a logical and rational classification system had to be applied. Accordingly, every AIA article published in the past 20 years was reviewed and classified according to the primary topic addressed and the primary research method utilized in conducting the research.
Trade Liberalization and Relative Performance Contracts in Import Competing Industries
Dr. Ya-Chin Wang, Department of Finance and Banking, Kun Shan University, Taiwan
To foster economic growth, multilateral agreements within the WTO/FTA framework have drawn attention to the wave of trade liberalization. Although a tariff raises revenue and may extract profits from foreign firms, it sacrifices consumer surplus and reduces social welfare for both countries. This paper introduces relative performance contracts into a quality-differentiated market and examines how import policy responds to this mechanism. It finds that, in an import-competing model with managerial delegation, free trade is optimal. Therefore, imposing a tariff is not only a beggar-thy-neighbor policy but also a “lose-lose” solution. The incentive contract plays a pivotal role in replacing trade intervention. Along with the increasingly fierce competition in globalized markets, many governments, in both less-developed and developed countries, protect domestic industries by adopting a variety of trade policies, including tariff and non-tariff barriers. Authorities may adopt import protection on the basis of product quality to extract rent from foreign firms in order to increase tariff revenue or domestic welfare. For example, a higher tariff rate is imposed on a luxury car than on a standard car. However, under the WTO (World Trade Organization) and FTAs (Free Trade Agreements) between countries, import tariffs and other trade obstacles are not allowed. This explains why many countries alter their economic policies toward free trade by opening markets and reducing tariffs. Simply put, free trade enables foreign firms to trade as effectively and easily as domestic firms. On this basis, this paper attempts to explore the possible motives for a government in the home country voluntarily coming to an amicable agreement with foreign firms. Brander and Spencer (1984) showed that a tariff has a profit-shifting effect in addition to its influence on consumer surplus and tariff revenue.
A great deal of research has focused on analyzing strategic trade policies under the theoretical model of vertical product differentiation. Zhou et al. (2002) analyze the implications of strategic trade theory for policies targeted at the quality of exports by a less-developed country (LDC) and a developed country (DC). Toshimitsu and Jinji (2007, 2008) further showed that social welfare and the appropriate trade policy depend on the mode of market competition and the assumptions about marginal production costs. However, given the separation of ownership and management that prevails in most modern enterprises, the owner seeks to maximize profit, but the delegated manager may be more concerned with performance, market share, or revenues than with profits alone. Research on sales (revenue) delegation starts with Vickers (1985), Fershtman and Judd (1987), and Sklivas (1987) (henceforth VFJS). In contrast with VFJS, there is significant evidence, both in practice and in the academic literature, that managers do care about relative performance [see Miller and Pazgal (2001, 2002 and 2005)].
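The relative-performance delegation mechanism described above can be illustrated with a minimal numeric sketch. The sketch assumes a simple homogeneous-good Cournot duopoly (not the paper's quality-differentiated market) with linear demand p = a - q1 - q2, constant marginal cost c, and managers who maximize own profit minus theta times the rival's profit; all parameter values are illustrative, not taken from the paper.

```python
def best_response(q_rival, a=10.0, c=2.0, theta=0.5):
    # Manager i maximizes pi_i - theta * pi_j; the first-order condition
    # gives q_i = (a - c - (1 - theta) * q_rival) / 2
    return (a - c - (1.0 - theta) * q_rival) / 2.0

def equilibrium(theta, a=10.0, c=2.0, iters=200):
    """Find the symmetric equilibrium by iterating best responses.
    The map is a contraction for 0 <= theta < 1, so iteration converges."""
    q1 = q2 = 1.0
    for _ in range(iters):
        q1, q2 = best_response(q2, a, c, theta), best_response(q1, a, c, theta)
    return q1, q2
```

With theta = 0 this collapses to standard Cournot output (a - c)/3; a positive theta makes managers compete more aggressively, which is the channel through which an incentive contract can substitute for trade intervention in arguments of this kind.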
Modeling Market Adoption of a Retail Innovation over Time and Space
Dr. Arthur W. Allaway, The University of Alabama, Tuscaloosa, Alabama
Dr. David Berkowitz, The University of Alabama in Huntsville, Huntsville, AL
Although the adoption of innovations / diffusion of innovations literature is very rich, the spatial study of adoption and diffusion has been limited, even though a large number of innovations are launched into geographically defined markets. This paper explores and models the driving forces of consumer adoption of a new loyalty card program launched in a major U.S. metropolitan area. Event history modeling is used to parameterize the effects of distance from the diffusion propagator, the firm’s marketing efforts, the competitive environment, and the role of previous adopters in influencing later adopters. The results yield new insights into the understanding of adoption and diffusion across space and time. The spatial tradition is central to the study of marketing. In retailing in particular, but in other areas as well, most marketing decisions have to take into account their impact on the size, shape, depth, or dynamics of the existing (or potential) customer base of the firm, i.e., the market area. One approach to the study of spatial phenomena in marketing has involved the modeling of consumer spatial behavior. Building on Reilly (1931), Huff (1964), and McKay (1973), significant advances have been made in understanding how consumers make choice decisions among different competitors in a market (Craig, Ghosh, and McLafferty 1984). Extensions in this area have considered the effects of image, habit, loyalty, competitive strategies, variety-seeking, and misspecification error in models of consumer spatial behavior (see, for example: Stanley and Sewell 1976, Fischer, Nijkamp, and Papageorgiou 1990, Rust and Donthu 1995, Fotheringham and Curtis 1999). A second approach to the study of spatial processes in marketing has treated the market area itself as the unit of analysis.
Research by Huff and Batsell (1977), Huff and Rust (1984), Rust and Brown (1986), Donthu and Rust (1989), Rust (1991), Thrall and del Valle (1996), and Thrall and Casey (2001), for example, has concentrated on developing methods for modeling market area boundaries and densities so that they can be compared statistically. Using these methods, business decisions that affect the sizes and shapes of those market areas can be evaluated more precisely, both over time and in response to firm and/or competitor initiatives. A third, although still emerging, approach to the study of market areas is concerned with the spatial diffusion of consumer response to the introduction of new innovations. Rather than a snapshot view or an instant-equilibrium assumption with respect to market area response, spatial diffusion research concentrates on describing, hypothesizing, and/or modeling the processes by which spatially defined markets change structure and individuals adjust their spatial behavior in response to new stimuli. Although a significant body of spatial diffusion theory does exist in geography and sociology as a result of the work of Hagerstrand (1952, 1965, 1967), Morrill (1968, 1970, 1975), and others (for a review see Morrill, Gaile, and Thrall, 1988), little of it has focused on products and services being purposefully launched to consumers in spatially defined markets (for exceptions, see Brown (1968, 1981), Mahajan and Peterson (1979), Gatignon, Eliashberg, and Robertson (1989), Allaway, Mason, and Black (1991, 1994), Redmond (1994), Baptista (2000), Dekimpe, Parker, and Sarvary (2000), Allaway, Berkowitz, and D’Souza (2003), Smith and Song (2004), and Garber, Goldenberg, Libai, and Muller (2004)). The temporal diffusion tradition in marketing is very strong. Hundreds of innovation-diffusion research studies have been published in the marketing literature over the past 40 years.
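The event-history setup described above can be sketched as a discrete-time hazard model, in which each household's weekly adoption probability is a logistic function of its covariates (distance from the propagator, marketing exposure, number of prior adopters). The coefficient values and function names below are hypothetical placeholders, not the paper's estimates, and the covariates are held fixed over time for simplicity where the paper's model would let them vary.

```python
import math

def adoption_hazard(distance_km, marketing, prior_adopters,
                    b0=-3.0, b_dist=-0.4, b_mkt=0.8, b_soc=0.05):
    """Weekly adoption probability as a logistic function of covariates.
    Coefficients are illustrative placeholders, not estimates."""
    z = b0 + b_dist * distance_km + b_mkt * marketing + b_soc * prior_adopters
    return 1.0 / (1.0 + math.exp(-z))

def prob_adopted_by(weeks, **covariates):
    """P(adopted within `weeks`) = 1 - product of weekly survival probabilities."""
    survival = 1.0
    for _ in range(weeks):
        survival *= 1.0 - adoption_hazard(**covariates)
    return 1.0 - survival
```

Under these placeholder coefficients, a household close to the propagator accumulates adoption probability much faster than a distant one, which is the distance-decay pattern the spatial diffusion literature predicts.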
The Price-Earnings Relation: The Case of Arthur Andersen
Dr. Kirk L. Philipich, University of Michigan, Dearborn, Dearborn, MI
The purpose of this study is to examine the impact of auditor reputation on the price-earnings relation. More specifically, this study examines annual earnings response coefficients (ERCs) before (2001 year-ends) and after (2002 year-ends) the primary Enron events (the shredding of audit documents on January 10, 2002, that substantially impaired Arthur Andersen’s reputation). Two possibilities exist. First, the market may perceive that Andersen’s shredding of audit documents, and its subsequent loss of reputation, caused auditors in general to take greater care, thus increasing the reliability of reported earnings numbers. This would manifest itself in stronger reactions to reported earnings. Alternatively, the market’s confidence in the reliability of reported earnings may have eroded, leading to smaller responses to reported earnings. In addition, any resulting change in the market’s response to reported earnings might be even more pronounced for Andersen clients. Statistical tests reveal a significant decrease in ERCs in the post-period. However, upon further examination, it is found that the clients of Deloitte and Touche and PriceWaterhouseCoopers exhibited large declines in the market’s response to their earnings releases. The clients of KPMG also saw declines, but not nearly as dramatic as those of Deloitte and Touche and PriceWaterhouseCoopers. However, the clients of Arthur Andersen and Ernst & Young saw their ERCs rise substantially. This study investigates the impact of a decline in auditor reputation on the price-earnings relation. The market response (ERC) to annual earnings announcements for the clients of the Big 5 auditors (Arthur Andersen, Deloitte and Touche, KPMG, PriceWaterhouseCoopers, and Ernst & Young) is examined. The ERC for the annual earnings announced one year prior to the Enron events is contrasted with the ERC for the annual earnings announced immediately following the Enron events.
The purpose of this examination is to determine if the market became more skeptical of reported earnings or believed that auditors, and their client firms, would take greater care in determining their reported earnings figures. Thus, the market response to the earnings announced by clients of the Big 5 auditors is examined. During 2001, Arthur Andersen was fined and/or paid over $100 million to settle lawsuits for audit problems concerning two clients, Waste Management and Sunbeam. However, the worst was yet to come. On November 8, 2001, a third company, Enron, announced that the company and its auditor, Arthur Andersen, determined that certain off-balance sheet variable interest entities (primarily a special purpose entity named Chewco) should have been consolidated in accordance with generally accepted accounting principles (GAAP). As a result, Enron stated that all earnings from 1997 through 2000 should not be relied upon and that restated earnings would lead to reductions in reported earnings by amounts ranging from a low of $96 million in 1997 to a high of $250 million in 1999. Also, Enron’s debt had been understated by a low of $628 million in 2000 to a high of $711 million in 1997. On January 10, 2002, Andersen admitted to shredding a significant number of audit documents related to the Enron audit. On January 17, 2002, Enron fired Andersen and three days later the Powers report was released with more details concerning Andersen’s involvement in certain questionable transactions.
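An ERC is conventionally estimated as the slope of announcement-window abnormal returns on unexpected earnings, and the pre/post comparison the study describes can be sketched with an interaction dummy: the interaction coefficient measures how much the ERC shifted after the event. The data below are simulated purely for illustration; the "true" coefficients are arbitrary, not the study's results.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
ue = rng.normal(0, 0.05, n)            # unexpected earnings (price-scaled)
post = np.repeat([0.0, 1.0], n // 2)   # 0 = pre-event window, 1 = post-event
# Illustrative "true" ERCs: 3.0 pre-event, falling by 1.2 post-event
car = 3.0 * ue + (-1.2) * post * ue + rng.normal(0, 0.01, n)

# CAR = a + b1*UE + b2*POST + b3*(POST*UE); b3 is the change in the ERC
X = np.column_stack([np.ones(n), ue, post, post * ue])
coef, *_ = np.linalg.lstsq(X, car, rcond=None)
erc_pre, erc_shift = coef[1], coef[3]
```

A significantly negative interaction coefficient corresponds to the "eroded confidence" scenario; a positive one to the "greater care" scenario.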
The Unintended Effects of the HIPAA Privacy Protections on Health Care Treatment Team and Patient Outcomes
Dr. Kimberly Jarrell, Dr. Jan Welker, and Dr. Donna Silsbee, SUNY Institute of Technology, Utica, NY
Dr. Francis Tucker, Syracuse University, Syracuse, NY
This study looked for unintended consequences of the Health Insurance Portability and Accountability Act of 1996 (HIPAA), legislation related to the privacy of health care information. More specifically, the study examined effects of the legislation on health care teams and patient care outcomes. Findings revealed that both the quality and flow of information between team members, including patients and their families as co-producers of health care services, declined following the implementation of the HIPAA Privacy Rules (Rules). Change in information flow had significant negative effects on flow of services, patient satisfaction, team satisfaction, and quality of care. Change in information quality had no significant effects on any of the outcome variables. Implications for policy and practice are discussed by the authors. Key words: health and hospital administration, teamwork, unintended consequences, public policy, co-production. The United States Congress enacted the Health Insurance Portability and Accountability Act (HIPAA) on August 21, 1996 with a threefold intent: to provide better access to health insurance, to limit insurance fraud and abuse, and to simplify administration (New York State Office for Technology 2004). The latter intent included requirements for the standardization of information transactions, the privacy of individually identifiable health information, and the security of health information and electronic signatures. The Rules actually took effect on April 14, 2003 (U.S. Department of Health & Human Services 2005), and the privacy portion of the Rules is the subject of this research. While HIPAA was originally written to address the privacy of electronic health information, the final Rules were expanded to include health information in any form (oral, electronic, paper, other media). The provisions protect medical records and other patient information routinely exchanged among health plans, doctors, hospitals and others.
Any of these “covered entities” that misuse personal health information face civil and criminal penalties up to $250,000 and 10 years in prison. Interdisciplinary teamwork is used extensively in health care (Jarrell 2003). Increasingly complex patient needs necessitate the input of more caregivers, often possessing specialized knowledge. Teams provide a more efficient, effective and adaptive mechanism for delivering care than traditional hierarchical models of health care (Coopman 2001; Heinemann et al. 1999). Consequently, the Joint Commission on Accreditation of Healthcare Organizations, the preeminent accrediting body for hospitals and health care organizations, identified interdisciplinary teams as essential in the delivery of health care services (Joint Commission 2002; Strassner 1997). According to Irving and Dickson (2004), effective provider-patient communication and the relationships it supports are necessary for quality health care delivery. Within knowledge-intensive businesses, customers and their service providers play a critical role in producing the service solution (Bettencourt et al. 2002). Likewise, health care providers have recognized the importance of including the patient, family and significant others as part of the health care team.
Corporate Governance: Theory and Practice
Dr. Malek Lashgari, CFA, University of Hartford, West Hartford, CT
Various theories and philosophies have provided the foundation for the development of alternative forms of corporate governance systems around the world. Furthermore, as economies have evolved through time, it appears that corporate executives have deviated from the sole objective of maximizing shareholders’ wealth. Owners of capital have responded to these forces for the purpose of preserving their wealth and earning a reasonable return on their invested capital. Whereas internal corporate control, external financial market forces, and institutional investors’ responses have been effective in securing shareholders’ wealth, legal protection also needs to be provided for shareholders. As a legal entity, a corporation enters into contracts to produce goods and services, and it has the right to own property. Furthermore, the firm can borrow from various lenders and raise cash by issuing shares of its ownership. Shareholders not only benefit from the earnings generated by the corporation, but, by electing members of the board of directors, they can indirectly oversee actions undertaken by the managers. These managers, as agents of the shareholders, are expected to act in the best interest of the owners of the corporation. Corporate managers can add value for common stockholders without decreasing the welfare of the other corporate stakeholders. For example, borrowing a portion of the capital needed to finance the firm’s activities would lead to a higher return for common stockholders. This is because borrowing is generally inexpensive for the firm, given the tax benefits available to business enterprises. Executive decisions may also result in a transfer of wealth from one group of stakeholders to another. For example, by undertaking risky investment projects, greater rewards may become available to common stockholders while bondholders receive no such benefits and instead bear the excessive risk. Corporate managers can also destroy wealth.
History offers numerous examples in which actions undertaken by corporate executives have resulted in the bankruptcy of the firm. The managers of a business enterprise, however, could add value for all corporate stakeholders, including owners of capital, labor, and society at large. This would be a case of Pareto optimality, in which the welfare of some group is increased without any decrease in benefits to the others.
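The leverage argument in the passage above can be made concrete with a small arithmetic sketch: because interest is tax-deductible, substituting debt for equity raises the return on equity whenever the firm earns more on its assets than its after-tax cost of debt. The figures are illustrative, not drawn from the paper.

```python
def roe(ebit, assets, debt_ratio, interest_rate=0.06, tax_rate=0.30):
    """Return on equity for a firm financing `debt_ratio` of its assets
    with debt. Interest is deducted before tax, so leverage concentrates
    the after-tax operating profit onto a smaller equity base."""
    debt = assets * debt_ratio
    equity = assets - debt
    interest = debt * interest_rate
    net_income = (ebit - interest) * (1.0 - tax_rate)
    return net_income / equity
```

For example, with EBIT of 120 on assets of 1000, an all-equity firm earns ROE of 8.4%, while financing half the assets with 6% debt lifts ROE to 12.6%; the same leverage, of course, also magnifies losses in bad years, which is the wealth-transfer risk the passage describes.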
An Evaluation of State Income Tax Systems and their Impact on State Spending and Revenue - A Multi-State Study
Prof. Demetrios Giannaros, University of Hartford, CT
The primary objective of this study is to carry out a comparative, multi-state public finance behavioral impact analysis of the introduction of the income tax system in various states. The emphasis of this multi-state study is to determine whether the introduction of an income tax system, a politically controversial issue, resulted in significantly higher levels of state spending or taxation. We use econometric techniques (such as interaction-variable analysis) to evaluate whether a significant structural change in state public policy on spending and taxation materialized after the introduction of the income tax in ten states. Our results do not show behavioral consistency across the ten states under study subsequent to the introduction of the income tax. In the last few years, about forty-five state governments in the USA have struggled to cover unexpectedly large budget deficits. The economic debate on the issue of state budget deficits has revolved around the level of state government spending and taxing, the appropriate form of taxation, the impact of such forms of taxation on politicians’ behavior, and the relative stability (steady stream of revenue) of alternative tax systems. Some of the discussion and debate turned to the system of taxation and its impact on the budget, spending, taxing, and the state economy. This study attempts to evaluate whether the introduction of the income tax system resulted in behavioral changes in terms of taxing and spending by state legislatures. For this purpose, we use econometric techniques to determine if there were structural changes after the introduction of the state income tax in ten different states.
Critics of the state income tax proclaimed that politicians would use it as a way to expand the size of state government, that is, to “tax and spend.” Proponents, on the other hand, viewed it as a fair and stable taxation system that does not necessarily result in bigger government. More specifically, some argued that the income tax is a hindrance, while others saw it as a stable and fair revenue-raising system. The proponents suggested that a system based on a proportionately more progressive income tax regime, relative to other, regressive taxes, would create a more stable state revenue stream and avoid excessive fluctuations in state revenue. The debate was potent, with some strongly believing that the income tax would significantly harm the state economy and individuals. The leading study quoted by the opposition at the time, an econometric study conducted by Prof. Thomas Dye (1990), predicted that the introduction of the income tax would have both negative economic and budget implications, with increased spending and taxation. I became interested in studying this issue using econometric analysis in 2003, at the peak of the recent state budget deficits and debate.
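The interaction-variable technique mentioned above can be sketched as follows: regress state spending on income, a post-adoption dummy, and their product; the dummy and interaction coefficients then measure any structural shift in the spending relationship after the income tax was introduced. The data, adoption year, and coefficients below are simulated for illustration and are not drawn from the study.

```python
import numpy as np

rng = np.random.default_rng(42)
years = np.arange(1950, 2000)
post = (years >= 1971).astype(float)   # hypothetical income-tax adoption year
income = 100 + 2.0 * (years - 1950) + rng.normal(0, 1.0, years.size)
# Illustrative data generated with NO true shift in the spending relation
spending = 10 + 0.25 * income + rng.normal(0, 0.5, years.size)

# spending = a + b*income + c*POST + d*(POST*income); c and d capture the shift
X = np.column_stack([np.ones(years.size), income, post, post * income])
coef, *_ = np.linalg.lstsq(X, spending, rcond=None)
intercept_shift, slope_shift = coef[2], coef[3]
```

Because the simulated data contain no structural break, both shift coefficients come out near zero; on real state data, jointly testing them against zero is the Chow-style question the study asks.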
Muse Air: Management in Crisis
Toby Pratt, Embry-Riddle Aeronautical University
Dr. Marian Schultz, The University of West Florida
Dr. James Schultz, Embry-Riddle Aeronautical University
This paper details the variables that led to the incorporation, operation, and eventual demise of Muse Air. The individuals involved with Muse Air, and its inextricable ties to Southwest Airlines, make this story unique. After its launch, bad timing, poor strategy, grueling competition, and unexpected obstacles plagued the fledgling airline. After four years of operation in a post-deregulation environment with no operating profits, Muse Air is sold to Southwest Airlines and, eventually, liquidated. This paper not only includes historical facts regarding the organization of the airline but also tells the story of a company and the effect it had on customers, employees, and the management team. Lamar Muse is at the head of Southwest Airlines as the first flight takes to the skies in June 1971. Muse is the company President and CEO. Years later, Muse finds himself at the helm of another airline, but this carrier bears his own name, Muse Air (Muse, 2002). Lamar Muse is no stranger to the airline industry. His experience within commercial aviation is extensive, having worked in senior positions with Trans-Texas Airways, American Airlines, Southern Airways, Central Airlines and Universal Airlines (Leary, 1992). During his 12 years at Trans-Texas Airways (TTA) as Secretary-Treasurer and Chief Financial Officer, Muse helps transform the carrier into one of the most profitable in the industry. TTA becomes only one of two carriers to make a profit, and his success at TTA attracts the attention of American Airlines, where he is recruited as Assistant Vice-President for Corporate Planning (Leary, 2002). At American Airlines (AA), one of Muse’s tasks includes the recommendation for a new airplane to replace AA’s aging Convair fleet. But despite his recommendation that the airline purchase the Boeing 737, AA selects the BAC One-Eleven, a decision the airline would later regret due to performance and maintenance problems.
Additionally, when Eastern Airlines begins a competitive shuttle service out of airports in Boston, New York, and Washington, D.C., Muse recommends offering customers a free flight if AA cannot provide them transportation on the next flight within 30 minutes. But AA selects a different strategy, one that causes it to lose market share in the shuttle market. After these two situations at AA in which his advice was not followed, Muse accepts a position in Atlanta, where he becomes Vice President for Finance at Southern Airways. Under Muse’s guidance, Southern soon becomes the most profitable carrier in the airline industry (Leary, 1992). After three years of success at Southern, Muse accepts the position of President of Central Airlines. But after two years, and the merger of competitors Southern and Frontier Airlines, Muse moves to Detroit to become President, Chief Executive Officer, and part owner of Universal Airlines (Leary, 1992). Universal is in major financial trouble when Muse arrives, but he develops a unique freight rate and introduces to the airline industry “hubbing,” the practice of bringing flights into a single airport, a “hub,” and sending them back out to their ultimate destinations along “spokes.” These improvements result in a $2.8 million profit for the carrier after Muse has been on the job for only a year. The following year the profit increases to $4.8 million. But when the company’s majority owners want to begin using the Boeing 747, Muse disagrees, believing that the aircraft is not right for their market. Lamar is fired over this disagreement, and the remainder of his contract, along with his share of the company, is settled for cash, making him extremely rich but unemployed at the age of 49 (Leary, 2002). Two years later Muse is hired as Chief Executive Officer of Southwest Airlines by Rollin King and Herb Kelleher. Muse is tasked with launching Southwest.
The initial market provides service among Dallas, Houston, and San Antonio with three Boeing 737s (Muse, 2002).
Consumer Loyalty – A Synthesis, Conceptual Framework, and Research Propositions
Dr. Lance Gentry and Dr. Morris Kalliny, Missouri University of Science and Technology, Rolla, MO
Numerous conceptual and empirical studies utilize the loyalty construct as a core part of their theoretical work. These studies purport to explain whether and why loyal consumers are more profitable for firms, mental models of satisfaction and loyalty, and guidelines for marketing strategies. However, an objective view of the literature shows little progress in approximately eighty years of research. In this article, the authors propose a conceptual definition of consumer loyalty and synthesize and discuss the probable factors of loyalty within a framework that is useful to scholars and practitioners. In 1923, Copeland wrote an article describing the theoretical relationship between brands and consumers’ buying habits. Albeit with different terminology, he described a continuum of consumer loyalty that incorporated both behavior and attitude. Throughout the next eight decades, researchers have argued for measurements of loyalty that were strictly behaviorally based (e.g., Burford, Enis, and Paul, 1971; Cunningham, 1956; Passingham, 1998; Olsen, 2002; Tucker, 1964) or strictly attitudinally based (e.g., Bennett and Kassarjian, 1972; Guest, 1942; Jain, Pinson, and Malhotra, 1987; Perry, 1969). Many others have echoed Copeland’s original thought and argued for a two-dimensional construct with both behavioral and attitudinal components (e.g., Backman, 1991; Chaudhuri & Holbrook, 2001; Day, 1969; Gahwiler and Havitz, 1988; Newman and Werbel, 1973; Oliver, 1999; Pritchard, Howard, and Havitz, 1992). Tucker (1964) strongly advocated using a purely behavioral measure of loyalty, not because he dismissed the importance of attitudes, but because he predicted scholarly “chaos” would ensue if attitudes were included in the operationalizations of loyalty. In Jacoby and Chestnut's (1978) extensive review of the brand loyalty literature, they found that most, if not all, of it suffered from extensive problems and that the results would probably not stand up to rigorous empirical analysis.
"Despite more than 300 published studies, BL [Brand Loyalty] research is kept afloat more because of promise than results." They bemoaned the lack of an established conceptual base for operationalizations, which resulted in inconsistent and ambiguous measurements and definitions along with problems with arbitrary cutoff criteria. In addition, Jacoby and Chestnut criticized researchers for their simplistic perspectives on loyalty (e.g., failing to consider multibrand loyalty, ignoring the larger perspective of loyalty and disloyalty, concentrating on static behavioral outcomes vs. dynamic causative factors) as well as noting many basic methodological errors (e.g., using inappropriate or undefined units of measurement, or confounding relationships with other measures of loyalty). Pritchard, Havitz, and Howard (1999) referenced acknowledgements dating back to 1971 that the literature has focused on measuring loyalty but fails to answer the question “Why are consumers loyal?”, and they concluded that this predicament persists. Our review of the current research indicates that the situation has not changed (see Choi, Kim, Kim and Kim 2006; Chandrashekaran, Rotte, Tax and Grewal 2007; Palmatier, Scheer and Steenkamp 2007). Researchers have determined that little is truly known about loyalty and have called for investigation into the fundamental meaning of loyalty (Oliver, 1999; Chandrashekaran, et al 2007), determination of the long-term consequences of loyalty (Iwasaki and Havitz, 1998), and investigation of additional loyalty antecedents (Pritchard et al, 1999). The majority of the existing literature on loyalty may be loosely classified as either consumer research or leisure research, with these terms incorporating all the contributing specialties (e.g., marketing, psychology, sociology, etc.).
A Quantitative Review of Organizational Outcomes Related to Electronic Performance Monitoring
Dr. D. Scott Kiker and Dr. Mary Kiker, Auburn University Montgomery, AL
We employ meta-analysis to assess the relationships between electronic performance monitoring (EPM) and subordinate job performance, stress, and job satisfaction. We found that EPM has a positive effect on performance quantity but a negative effect on performance quality. However, the EPM-performance quality relationship was moderated by task difficulty such that EPM improves performance quality when the task is simple, but detracts from it when the task is complex. EPM was also shown to be negatively correlated with employee job satisfaction and positively associated with job stress. Implications of these findings, as well as recommendations for future research, are discussed. The widespread proliferation of computer technology has significantly altered many traditional business practices. In the context of monitoring employee performance, the availability of these technologies has the potential to more accurately quantify indicators of employee performance without the inherent shortcomings of human processing. In addition, there may be significant cost savings associated with assessing employee performance electronically. Electronic performance monitoring (EPM) systems use electronic technologies to collect, store, analyze, or report the actions or performance of individuals on the job (Nebeker & Tatum, 1993). A 2001 survey of U.S. corporations attests to the pervasiveness of EPM systems in today’s organizations. Specifically, the American Management Association reported that 78% of mid- to large-sized companies in the U.S. were conducting electronic monitoring activities. This represents a more than two-fold increase in organizational EPM activity since 1997 (AMA, 2001). Companies’ use of electronic performance monitoring techniques has sparked much controversy over the past three decades, and the controversy continues unabated to the present time.
Much of the controversy surrounds the impact of EPM systems on important organizational outcomes like employee productivity, performance quality, job satisfaction, and job stress. A review of the extant literature on the effects of EPM on each of these important indicators of organizational effectiveness, as well as our hypotheses regarding EPM’s effects on these variables, is presented below. Proponents of EPM systems often cite a number of potential advantages of electronic monitoring, including increased employee productivity, fairer assessment of employee performance, and the ability to provide timely and accurate feedback, which should lead to better employee performance (Aiello, 1993; Alder & Ambrose, 2005). At least two motivational theories support the notion that, at least under certain conditions, EPM would enhance employee performance (Stanton, 2000). First, research into social facilitation suggests that EPM might positively affect employee performance (Stanton, 2000; Griffith, 1993). More specifically, Zajonc (1965) and others (Cottrell, 1972; Cohen & Davis, 1973; Cohen, 1979; 1980) argue that the presence of an observer, or audience, causes the actor to experience heightened emotional arousal.
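The meta-analytic combination of correlations that a study like this relies on is conventionally done on the Fisher-z scale with inverse-variance weights. Below is a minimal fixed-effect sketch; the study inputs are hypothetical EPM/job-stress correlations, not values from the paper, and a full meta-analysis would add a random-effects variance component and moderator tests (e.g., for task difficulty).

```python
import math

def fixed_effect_mean_r(studies):
    """Combine (r, n) pairs with inverse-variance weights on the
    Fisher-z scale (weight = n - 3), then transform back to r."""
    num = den = 0.0
    for r, n in studies:
        z = 0.5 * math.log((1 + r) / (1 - r))  # Fisher z-transform
        w = n - 3                              # inverse variance of z
        num += w * z
        den += w
    z_bar = num / den
    return (math.exp(2 * z_bar) - 1) / (math.exp(2 * z_bar) + 1)

# Hypothetical EPM/job-stress correlations as (r, sample size) pairs
studies = [(0.30, 120), (0.18, 60), (0.25, 200)]
pooled = fixed_effect_mean_r(studies)
```

The pooled estimate necessarily lies between the smallest and largest input correlations, with larger samples pulling it more strongly toward their values.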
Intertemporal Linkages Between Hong Kong and Shanghai Stock Markets Surrounding the Handover of Hong Kong
Dr. Joseph French, University of Northern Colorado, CO
The linkages between the stock markets of Hong Kong and Shanghai are examined in this paper for the periods before, during, and after the 1997 handover of Hong Kong. Return relationships of the two markets are shown to have changed after the handover. Variance decomposition and Granger causality indicate an increasingly important role of the Shanghai stock market relative to that of the Hong Kong stock market. The two markets are shown to be cointegrated, and results indicate that this cointegration has increased after the handover. The existence of linkages across different national stock markets has important implications for investors who are seeking diversification opportunities internationally. When linkages suggest co-movement between different markets, any one market would be representative of the behavior of the group of markets. This would effectively reduce the scope for portfolio diversification possibilities. This implication has increased interest in the topic of market linkages and led many researchers to investigate whether different markets are interrelated. This paper looks at the intertemporal linkages between the Shanghai and Hong Kong stock markets for the periods before, during and after the handover of Hong Kong. The linkages across markets that will be examined include contemporaneous co-movements, causal relationships, responses to cross-market shocks, and long-run interdependence. The handover of Hong Kong to China was a historic event that has real economic implications for the countries of the Asia-Pacific Rim. This event is therefore not just symbolic, nor is it solely a question of political ownership. The stated objective of the Chinese government is to develop Shanghai into a leading financial center by the year 2010 (Asian pacific report). In the three years after the handover of Hong Kong to China, Hong Kong experienced continuing deflation and economic slowdown.
Hong Kong’s sluggish economy rebounded in 2002 relative to a year earlier. The U.S.-Iraq War and the SARS epidemic in early 2003, however, apparently affected Hong Kong’s economy; for this reason the period after 2002 was not considered in this paper. In his 1997 policy address, Chief Executive Tung Chee-Hwa emphasized that Hong Kong would increase economic cooperation with the Mainland and facilitate economic integration. Toward this end, Hong Kong has actively worked on a Closer Economic Partnership Arrangement (CEPA) with the Mainland. Much of the research using the methodology this paper applies has used data surrounding the Asian Financial Crisis (Maroney, Naka and Wansi 2004). Moon (2001) investigated the impact of the 1997 Asian financial crisis on stock market integration in East Asia and found that, in both the long and the short run, East Asian stock markets have become increasingly integrated with the US market since the Asian Financial Crisis. This, Moon said, confirmed the view that the Asian crisis brought about US dominance over Asian stock markets and should increase the linkages between the major national stock market indices. Sheng and Tu (2002) examined the changing patterns of linkages among the stock markets of 12 Asian countries for the periods before and during the Asian financial crisis.
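The Granger-causality step in this kind of methodology can be sketched as a nested-OLS F-test: if adding lagged values of x significantly reduces the residual sum of squares of an autoregression of y, then x is said to Granger-cause y. The series below are simulated so that x leads y by one period; the lag length and parameters are illustrative, and a full treatment would add lag-selection criteria and cointegration tests.

```python
import numpy as np

def granger_f(y, x, lags=2):
    """F-statistic for 'x Granger-causes y': compare an AR(lags) model
    of y (restricted) with one that adds lagged x terms (unrestricted)."""
    n = len(y)
    rows = range(lags, n)
    Y = np.array([y[t] for t in rows])
    Xr = np.array([[1.0] + [y[t - i] for i in range(1, lags + 1)]
                   for t in rows])
    Xu = np.array([[1.0] + [y[t - i] for i in range(1, lags + 1)]
                         + [x[t - i] for i in range(1, lags + 1)]
                   for t in rows])
    def rss(X):
        beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
        resid = Y - X @ beta
        return resid @ resid
    rss_r, rss_u = rss(Xr), rss(Xu)
    df_num, df_den = lags, len(Y) - Xu.shape[1]
    return ((rss_r - rss_u) / df_num) / (rss_u / df_den)

# Synthetic example: x drives y with a one-period lead
rng = np.random.default_rng(1)
x = rng.normal(0, 1, 500)
y = np.zeros(500)
for t in range(1, 500):
    y[t] = 0.3 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.normal()
```

By construction, granger_f(y, x) comes out very large here, while granger_f(x, y) stays near 1; comparing such statistics in both directions is how the paper's causal ordering of the two markets is established.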
Microfinance: Effects of Contingent Incentive Programs on the Performance & Productivity of Loan Officers
Dr. Jamaluddin Husain, Purdue University Calumet
Dr. Jay Jiwani, Roosevelt University Chicago
Global interest in alleviating poverty and assisting the poor through microfinance institutions (MFI) continues to grow. Since the operation of microfinance services is essentially labor intensive, the MFI industry is continuing to strive to identify productivity improvement strategies for achieving cost reduction. Incentives are among the best tools to motivate MFI employees and staff to align with their organizations’ objectives. The purpose of the study is to provide an overview of the incentive strategies available for and used by MFIs and their impact on MFI outcomes, with a focus on the financial motivation of extrinsic incentives to stimulate the productivity and performance of loan officers in MFIs. The Central Intelligence Agency World Factbook (2007) reports there are over 6.5 billion people on this planet (www.cia.gov), of which approximately 1.2 billion people live on less than one dollar a day, and 2.8 billion people live on less than two dollars a day (Holvoet, 2004; Stiglitz, 2002). The seminal work of Stiglitz (2002), the winner of the 2001 Nobel Prize in economics, notes that the greatest challenge to address is the growing problem of world poverty. According to a Harvard Business School case study by Prahalad and Hammond (2002), the vast majority of the world’s population is at the bottom of the world pyramid. Approximately 62% (4 billion people out of 6.5 billion) of the world population is thus classified as poor, and thus the potential of microfinance institutions (MFIs) is enormous. Already there are over 10,000 MFIs internationally and the number is growing. However, experts agree that fewer than 200 MFIs worldwide are self-sustaining, while the remainder survive on donations, grants and subsidies (as cited in Koveos, 2004). According to Harris (2002), even though the problem of poverty has been precisely articulated and defined, effective ways of alleviating poverty are widely debated. MFIs can be a powerful tool in this effort.
Koveos (2004) notes that MFIs, as vehicles for reducing poverty in developing countries, have gained significant support from stakeholders (donors, society, government, volunteers, staff, and borrowers) during the last 20 years. However, the challenge is to identify the most effective ways to use microfinance products and services to alleviate poverty worldwide (Holvoet, 2004). The debate regarding the sustainability of MFIs involves two camps. The institutionalists believe that MFIs should not be dependent on subsidies but should be self-sustaining, while welfarists believe that the main objective of MFIs should be to alleviate poverty, more specifically, reaching depth of outreach by serving the very poor. Therefore, welfarists believe in a continued subsidy-and-grant approach for MFIs. The literature indicates that, historically, repayment rates for the very poor are considerably low, which negatively impacts sustainability. Brau and Woller (2004), as well as Morduch (1999), believe that Pay-For-Performance (PFP) incentives may be designed to address the two views and strengthen the sustainability of MFIs. Organizations and businesses have used staff incentives for more than a century. Large companies, such as International Business Machines and Procter and Gamble, first introduced profit-sharing plans in 1887 to increase productivity and organizational commitment. Taylor (1911) conducted a pioneering study on the impact of piece-rate pay. According to Pfeffer and Sutton (2006), incentives are best used to motivate members of organizations to align their work with the organizations’ objectives. The authors further articulate a deeply held assumption of PFP theory as follows: ask a researcher, a practitioner, a manager, a worker, a student, or a young child whether giving financial rewards for performance will lead to higher performance, and the answer will be at least a qualified yes.
A Geopolitical Issue: Energy at a Turning Point
Dr. Flory Anette Dieck-Assad, Instituto Tecnológico y de Estudios Superiores de Monterrey, Monterrey, N.L., Mexico
Petróleos Mexicanos (PEMEX), a Mexican state-owned company, is the only company authorized by law to produce oil and gas in Mexico. PEMEX can neither issue equity nor borrow money by selling bonds; however, it finances one-third of the Federal Government’s expenses, leaving scarce money for drilling activities and thus restricting its ability to develop new reserves. PEMEX requires huge flows of investment in order to avoid financial bankruptcy and secure the energy supply for Mexico’s sustainable development. The objective of the case is to place the student in the debate about sustainable development that encompasses political, economic, financial, and ethical decisions, in a changing geopolitical scenario where the “global warming” issue presents a new challenge for doing business in the future. A detailed Teaching Note is available from the author. On the foggy morning of April 7, 2006, Elba Esther Gordillo Morales, national president of the Education Syndicate—Sindicato Nacional de Trabajadores de la Educación (SNTE)—was staring through the window. Suddenly, she decided to call a meeting in her office with Expert Consultants, Inc. (EC, Inc.). She needed their evaluation to know whether what she did in 2003 to support the energy reform was the right decision. As a member of the Mexican Congress, she had supported the energy reform even though the rest of the members of her political party, the PRI, decided not to support it. As a result, she was expelled from her political party. She worried about the repercussions that her decisions had caused. When she read the statement of José Ramón Ardavín, undersecretary of Natural Resources and Environment—Secretaría del Medio Ambiente y Recursos Naturales (Semarnat)—on April 1, 2006, she was reminded of the effects her decision would have on Mexico’s energy future.
She was saddened, disturbed, and made uneasy by the ill-advised decisions of lawmakers that had allowed the Federal Government to take advantage of Petróleos Mexicanos (PEMEX), the Mexican state-owned petroleum company, for more than twenty years. “They have been milking PEMEX by taking its profits and starving this ‘cash cow’ to death through insufficient investment in technology and exploration,” she thought. “The feared ghosts of the past are a reality now in a new geopolitical scenario where the threat of global warming is endangering PEMEX’s role in Mexico’s future development. What will happen if the Federal Government continues to bleed PEMEX to make up for insufficient tax revenues? How long will PEMEX be able to survive without sufficient funds for development and exploration? Will national energy independence be lost to the private sector? Will PEMEX be alive and financially able ten years from now to provide the energy for Mexico’s sustainable development? Am I on the right track?” Elba Esther felt that she needed to re-evaluate the circumstances that triggered her decision to support the energy reform. She hoped that EC, Inc. could give her an objective analysis of the present situation, including the role of the Secretary of Energy (SENER) and others involved in the energy and fiscal reforms.
Large Firms & Small Firms: Job Quality, Innovation and Economic Development
Dr. Richard Judd and Dr. Ronald D. McNeil, University of Illinois at Springfield, IL
Economic development strategies and methods must change. Why? Competition for new plants or companies to locate in communities no longer comes from other communities, counties or states. Competition for plants and companies has become global in today’s flat economic landscape. Globalization of services and production, along with markets for goods, capital, services and currencies, impacts decision-making for all companies. However, within the United States, most federal programs for economic development are written for the economy of the 20th century, not that of the 21st century. In order to compete successfully in the global environment, some experts are abandoning traditional approaches to economic development. Rather than relying solely on recruiting large firms with tax breaks, financial incentives and other inducements, more progressive economic development experts are beginning to extend efforts to support the growth of existing enterprises and to promote the practice of building businesses from the ground up. The 21st Century Economic Development Model has three complementary features which were not part of the 20th century approach to economic development. The three features of the 21st Century Economic Development Model are: (1) development and support of entrepreneurs and small businesses; (2) expansion and improvement of the infrastructure; and (3) development or recruitment of a skilled and educated workforce. These new approaches are founded upon improved education from kindergarten through higher education; infrastructure development by the community, region, state, and country; creation and maintenance of an attractive business climate; and improvement in the quality of life within a community. The overriding reason for the change in approach to economic development is clear: experience demonstrates that economic development strategies for attracting large firms are unlikely to be fruitful and, even if successful, may come at a great cost.
The new “vision” is to support the innovative prowess of entrepreneurs and small businesses so that these developing ventures can produce new jobs for the community. Historically, entrepreneurs began small companies with one or two employees; when successful, these tiny companies grew into Ford Motor Company, Boeing Aircraft, Hewlett Packard and the like. Over time, this strategy changed into one of attempting to attract subsidiaries or plants of large firms to locate in a community, but this strategy is no longer working. What is occurring is a return to and refinement of the approach of the late 19th and early 20th century in the United States. The overarching question for today’s economic development experts is: Are they willing to return to a strategy that once allowed small businesses to flourish and some to become large employers? This paper addresses this issue and provides evidence in support of the 21st Century Economic Development Model. The paper will also offer recommendations for further research on the 21st Century Model, discuss whether or not public engagement in economic development is itself cost-effective, and demonstrate that economic development is an effective socio-economic pursuit. On the surface, the direct economic effects on a local economy from a large firm entering a community appear as significant gains in employment and personal income.
The 3D Transformational Leadership Model
Dr. Eli Konorti, P. Eng., University of British Columbia, Canada
Leadership is one of the most interesting topics of all time. Bass (1990) stated, “The study of history has been the study of leaders–what they did and why they did it” (p. 3). The first studies of leadership centered on theory. Researchers and scholars sought to identify leaders’ styles and compare them to the demands or conditions of society. In later years, as leadership became a topic of empirical study, researchers, academics, and scholars alike attempted to understand and define leadership. Definitions such as process, power, initiation of structure, influence, and others began to emerge. Bass (1990) postulated that scholars and researchers have debated and deliberated the definition of leadership for many years. Bass wrote that there are as many definitions of leadership as there are people attempting to define it. However, as one looks at the evolution of the leadership field, a trend emerges. The earlier definitions identified leadership as a movement, one that consisted of individual traits and physical characteristics (Bass, 1990). In later years, scholars used the term inducing compliance to describe the role of the leader. More recently, the view of leadership has become one of influencing relationships, initiating structure, and achieving goals (Friedman & Langbert, 2000). Starting in the early 1930s, theorists used pictorial models to explain their theories. The first few theories on leadership centered on types of leadership such as autocratic, democratic, and laissez-faire (Wren, 1990). Theorists later expanded the field of leadership to include human attributes such as ability and intellect. The leadership continuum started with the study of traits and proceeded to behavioral, situational, and eventually, contingency theories. Leadership models then shifted their focus back to leader traits and personality. For example, Wren (1990) wrote, “Charisma returned to leadership theory” (p. 386).
These leadership models ranged from simple to very complex. Yet a close examination of these models and the leadership domain as a whole suggests converging definitions of leadership that subsequently led to a paradigm referred to as transformational leadership. Notwithstanding the transformational models that currently exist, there seems to be an inherent void in these models concerning a few traits and characteristics of transformational leaders that could be addressed with a new and innovative model. The purpose of this paper is to draw on peer-reviewed literature and emerging trends in transformational leadership with the intention of developing a new leadership model that looks at three leadership traits: courage, wisdom, and vision. The paper will discuss and attempt to reconcile the three traits and shed light on their relevance vis-à-vis transformational leadership. These three traits are incorporated into a three-dimensional model, resulting in a new transformational leadership model coined the 3D Transformational Leadership Model. The paper is organized as follows. First, I provide a literature review of historical and current thinking about transformational leadership. Second, I discuss the method and process I used to develop the new model. The following section discusses the conceptual framework and provides a definition of the three transformational leadership traits used in the model. Then I report the data I collected and discuss the first phase of the development of the model.
An Analysis of Perceptions of Managers in Manufacturing Operations of Personal Engagement in Pre-Event Natural Disaster Planning
Dr. James L. Morrison, University of Delaware, Newark, DE
Dr. G. Titi Oladunjouye, Albany State University, Albany, GA
The findings of this study suggest that managers in the manufacturing sector appear to be bystanders in natural disaster preparedness planning. While they feel fairly confident about being able to contend with a natural disaster themselves, they are generally not actively engaged in the planning process. Ironically, even though they exhibit self-confidence in their individual ability to take care of themselves if a natural disaster struck, they are not satisfied with the thoroughness of their current natural disaster pre-event planning process. Natural disasters pose challenges for leaders, employees, customers, and suppliers, among others, in both the short term and the long term. In the face of great uncertainty, there is generally little time in which to respond, and employee decisions on how to proceed during a natural disaster can become life-or-death issues. Therefore, the degree to which an organization undergoes planning in anticipation of enduring a natural disaster is likely to affect the success or failure of outcomes. Being involved in the planning process is critical, since policies and practices in place will likely have limited effect if individuals are unaware of them or have little confidence in their effect. Personal involvement in pre-event planning for eliminating situations or conditions that interfere with an individual’s capacity to survive a natural disaster may be critical in generating practices that are meaningful and purposeful to the employees themselves. Perceiving a natural disaster as a personal opportunity to get involved is also an act of engaging in a learning process. By creating a mindset that participation is important, managers will likely become more responsible for the success of their organization’s preparedness. Individual capacity to enhance personal protection from a sudden onslaught of disaster events is made possible through working with others to generate practices that target specific needs.
Therefore, making the transition from simply being aware to actively participating in a cause may enable employees to respond better in threatening circumstances. For this research, a natural disaster is a sudden calamitous event, resulting from atmospheric or other geological imbalances, that threatens the viability of the organization and is characterized by chaos, disruption of operations, confusion, and even the death of employees. An intriguing question addressed here is how personally involved managers are in actually preparing both themselves and their organizations to react to a natural disaster, should one suddenly occur. The target for data collection is employees in mid-management positions, since they are the most affected if a natural disaster should strike a facility. In this study, a basic assumption is that personal involvement is likely to be an integral component of any strategy to overcome the complacency and status quo framework that can easily undermine attempts to organize individuals into an effective natural disaster response team.
The Effect of Extending the Trading Hours on Volume and Volatility: The Case of Euronext Paris and Deutsche Boerse
Dr. Deniz Ozenbas, Montclair State University, NJ
There is interest in both the academic literature and the finance industry in how extending the trading hours of a stock market affects trading volume and volatility in that market. This study compares the extensions of trading hours at Euronext Paris and Deutsche Boerse that took place within a few months of each other. We show that Euronext Paris was more successful in implementing the rule change, in terms of the resulting trading volume and volatility patterns, than Deutsche Boerse. How extended trading hours affect trading volume and volatility is an interesting question for both finance practitioners and academics. In this study we investigate the trading volume and intra-day volatility patterns in these two markets around their respective rule changes. For both Euronext Paris (Paris Bourse) and Deutsche Boerse we study the transaction records, during the year 2000, of the stocks that make up a major index. We use the BDM database of the Paris Bourse for the transactions of the stocks that make up the CAC 40 index. The transactions database of the stocks that make up the DAX 30 index was obtained from Deutsche Boerse. Trading hours at the Paris Bourse at the beginning of 2000 were 9:00 am to 5:00 pm. The hours were extended on April 1st to 9:00 am to 5:30 pm. The market opens with a call auction within the first minutes after 9:00 am, and there is another call auction that takes place about 5 minutes after the close. We divide the study period into two intervals: the first interval is from January 1st to March 31st and the second is from April 2nd to December 31st.
The first interval corresponds to the shorter trading hours and the second to the longer trading hours. Trading hours at Deutsche Boerse at the beginning of the year were 9:00 am to 5:30 pm, but they were extended to 9:00 am to 8:00 pm on June 2nd, 2000. Similar to the Paris Bourse, trading opens with a call auction within the first few minutes after 9:00 am. There is an intraday call auction a few minutes after 1:00 pm. Also, again similar to the Paris Bourse, there is a call auction about 5-10 minutes after the close. Before the extension of trading hours, the closing call was around 5:40 pm. After the extension, this call was kept as the second intraday call auction, and the closing call auction takes place a few minutes after 8:00 pm. Again, we divide the study period into two intervals. The first interval is from January 1st to May 31st, corresponding to the shorter trading hours, and the second interval is from June 3rd to December 31st, corresponding to the longer trading hours. We include in our study all the stocks that were part of the DAX 30 and CAC 40 indices as of December 31, 2000, and for which we have uninterrupted data for the whole year. For both markets we eliminated stocks that traded (over the full span of trading days) in fewer than 90% of all half-hour intervals. As a result, this methodology gives us a sample of 28 stocks for Deutsche Boerse and 39 stocks for the Paris Bourse. A common error filter is to check for returns that appear too large for a given interval. The data were checked for half-hour returns of more than plus or minus 15%; however, we find that the data were free of such extreme returns. Six days that had a market-wide trading halt at Deutsche Boerse were eliminated. Finally, the data were filtered for any observations with missing volume information.
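The sample-construction rules described above (the 90% half-hour-coverage requirement and the plus-or-minus 15% extreme-return screen) can be sketched as follows. The tickers and trade records are invented for illustration; the actual study applies these screens to the CAC 40 and DAX 30 transaction records.

```python
# Sketch of the paper's sample-filtering rules on hypothetical data.
def filter_sample(trades_by_stock, n_intervals, min_coverage=0.90,
                  max_abs_return=0.15):
    """trades_by_stock maps ticker -> list of (interval_index, return).
    Keeps stocks that traded in at least `min_coverage` of all half-hour
    intervals, and flags any half-hour return beyond +/-15%."""
    kept, extreme = {}, []
    for ticker, trades in trades_by_stock.items():
        covered = {i for i, _ in trades}
        if len(covered) / n_intervals < min_coverage:
            continue  # too thinly traded: drop from the sample
        for i, r in trades:
            if abs(r) > max_abs_return:
                extreme.append((ticker, i, r))  # flag suspect return
        kept[ticker] = trades
    return kept, extreme

# Hypothetical example: 10 half-hour intervals, three stocks.
data = {
    "AAA": [(i, 0.01) for i in range(10)],                     # full coverage
    "BBB": [(i, 0.02) for i in range(8)],                      # 80%: dropped
    "CCC": [(i, 0.20 if i == 3 else 0.0) for i in range(10)],  # one extreme
}
kept, extreme = filter_sample(data, n_intervals=10)
print(sorted(kept), extreme)
```

Applying the same screens to the real transaction data yields the paper's samples of 28 Deutsche Boerse stocks and 39 Paris Bourse stocks, with no extreme returns found.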
A Review of Employee Motivation Theories and their Implications for Employee Retention within Organizations
Dr. Sunil Ramlall, University of St. Thomas, Minneapolis, MN
The article provides a synthesis of employee motivation theories and offers an explanation of how employee motivation affects employee retention and other behaviors within organizations. In addition to explaining why it is important to retain critical employees, the author describes the relevant motivation theories and explains the implications of employee motivation theories for developing and implementing employee retention practices. The final segment of the paper provides an illustration, with explanation, of how effective employee retention practices can be explained through motivation theories and how these efforts serve as a strategy for increasing organizational performance. In today’s highly competitive labor market, there is extensive evidence that organizations, regardless of size, technological advances, market focus and other factors, are facing retention challenges. Prior to the September 11 terrorist attacks, a report by the Bureau of National Affairs (1998) showed that turnover rates were soaring to their highest levels over the last decade, at 1.3% per month. There are indeed many employee retention practices within organizations, but they are seldom developed from sound theories. Swanson (2001) emphasized that theory is required to be both scholarly in itself and validated in practice, and can be the basis of significant advances. Given the large investments in employee retention efforts within organizations, it is rational to identify, analyze and critique the motivation theories underlying employee retention in organizations. Low unemployment levels can force many organizations to re-examine employee retention strategies as part of their efforts to maintain and increase their competitiveness, but they rarely develop these strategies from existing theories.
The author therefore describes the importance of retaining critical employees and explains how employee retention practices can be made more effective by identifying, analyzing, and critiquing employee motivation theories and showing the relationship between employee motivation and employee retention. Furthermore, Hale (1998) stated that 86% of employers were experiencing difficulty attracting new employees and that 58% of organizations claimed difficulty retaining their current employees. Even when unemployment is high, organizations are particularly concerned about retaining their best employees. In today’s business environment, the future belongs to those managers who can best manage change.
Determinants of Consumer Trust of Virtual Word-of-Mouth: An Observation Study from a Retail Website
Dr. Shahana Sen, Fairleigh Dickinson University, Teaneck, NJ
Research in communication has found that audiences establish a speaker’s credibility by his or her reputation, experiences and knowledge, as well as by how much he or she can be trusted in a given situation. Extending this research, consumer psychologists have found that the persuasive power of person-to-person word-of-mouth communication is higher than that of marketer-generated communication, such as advertising and promotion. In this paper, we study consumers’ trust in, and consequently their perceptions of the helpfulness of, virtual word-of-mouth in the form of online consumer reviews, which consumers have been increasingly relying upon, and we test our propositions using observation data from an e-retail Website. Enabled by new information technologies, today’s consumers have real-time access to information, insight and analysis, giving them an unprecedented arsenal to help make purchase decisions (Deloitte, 2007). According to the Deloitte study, to build their knowledge arsenals, consumers are turning to virtual word-of-mouth (or e-WOM) in the form of online consumer reviews in large numbers, and these reviews are having a considerable impact on their purchase decisions. According to the Deloitte Consumer Products Group survey, almost two-thirds (62 percent) of consumers read consumer-written product reviews on the Internet. Of these, more than eight in 10 (82 percent) say their purchase decisions have been directly influenced by the reviews, either steering them toward a different product than the one they had originally been considering or confirming the original purchase intention. The impact of word-of-mouth (WOM) on consumer decision-making has long been established by consumer psychologists (Brown and Reingen 1987; Feldman and Spencer 1965; Herr, Kardes and Kim 1991; among others).
WOM information has been described as the most powerful form of marketing communication, and studies have shown that users find WOM more believable than commercially generated information (Hutton and Mulhern 2002). However, while e-WOM has some characteristics in common with traditional WOM, it is distinctive in that it shares other characteristics with marketer-generated communications, such as advertising, and additionally has unique characteristics of its own. For example, a characteristic shared with traditional WOM is that e-WOM is also communicated by consumers and not by the marketers of the product, making it more believable to the reader. In traditional WOM, on the other hand, the audience establishes the speaker’s credibility by inferring his or her reputation, experiences and knowledge, as well as how much he or she can be trusted in a given situation. In the case of e-WOM, the reader is not familiar with the credentials of the reviewer and has to infer them from the cues that are present within the review and its environment (e.g., the credibility of the Website may be one important surrogate). Moreover, quite often the review is featured on the marketer’s own Website, as in the case of Amazon.com or BarnesandNoble.com, rather than on an independent third-party site such as epinions.com, consumerREVIEW.com or dooyoo.com.
The Hewlett Packard – Compaq Computers Merger: Insight from the Resource-Based View and the Dynamic Capabilities Perspective
Preeta Roy, The Wharton School, University of Pennsylvania, Philadelphia, PA
Probir Roy, University of Missouri-Kansas City, Kansas City, MO
In this paper, we investigate the ongoing challenges posed by consolidation in the technology industry. We focus on two different paradigms to explore value creation in acquisition events: the resource-based view (RBV) and the dynamic capabilities perspective. We use these two perspectives to analyze the potential of technology mergers, focusing specifically on the merger of HP and Compaq. The HP-Compaq merger presents an interesting case in which these two paradigms can be used to gain insight into potential outcomes. We begin with an overview of relevant literature. We then analyze HP and Compaq in terms of resource mix and the combined synergies that might arise from related resources. This is followed by an analysis of each company’s acquisition experience to determine whether there exists the (dynamic) capability to integrate. Mergers and acquisitions have been, and continue to be, a topic of great interest to researchers trying to understand the factors explaining why some firms perform better than others in managing the acquisition process. In managerial practice as well as in academic writings, the management of the post-acquisition integration phase is established as the single most important determinant of shareholder value creation (or value destruction) in the acquisition process (Zollo, 2001). As Zollo and Singh (2001) find, the type of acquisition (horizontal or market extension) is an important variable in understanding performance implications. In horizontal acquisitions, there exists a higher potential for efficiency-driven cost reductions. This position pertains to the resource-based view of the firm and the impact of resource (and market) relatedness between the two firms. On the other hand, such acquisitions require a more complex integration process.
There is a greater number of potential overlaps of resources and activities across the organizations, and consequently a large array of simultaneous, independent decisions and action steps is necessary to accomplish this integration. In such a case, the set of post-acquisition decisions about the manipulation of resources, the (dynamic) capability to do so, and the match between these two factors seem to matter most (Zollo and Singh, 2001). To deepen our understanding of how resources are applied and combined in obtaining strategic advantage, the RBV framework is utilized. The RBV is a model built on the notion that firms are heterogeneous in their resources (Teece, Pisano and Shuen, 1997; Barney, 1991; Wernerfelt, 1984). Resources include all assets, capabilities, organizational processes, firm attributes, information, and knowledge controlled by a firm that enable the firm to conceive of and implement strategies that improve its efficiency and effectiveness (Barney, 1991). Further, resource endowments are “sticky”: firms are to some degree stuck with what they have and may have to live with what they lack (Teece, Pisano and Shuen, 1997). As Teece, Pisano, and Shuen (1997) state, resources are sticky because 1) business development is an extremely complex process; 2) some assets are not readily tradeable; and 3) even when an asset can be purchased, firms may stand to gain little from doing so, as the price paid for the asset fully capitalizes the rents from that asset (unless the firm is lucky or possesses superior information). It is this third point that is central to this paper: can the acquisition of a firm create value beyond the competitive market price paid for the acquisition? Acquisition of related resources between the acquirer and the target might account for enhanced performance of the combined entity.
Premiums paid to gain control of the target underestimate the potential synergies that could be gained from relatedness, particularly in consolidation-oriented acquisitions (Singh and Zollo, 2000).
Identifying Global Leadership Competencies: An Exploratory Study
Cristina Moro Bueno, Grupo Antolin, North America
Dr. Stewart L. Tubbs, Eastern Michigan University, Michigan
The influence of globalization and technology requires new business paradigms and new leadership competencies. The goal of this study was two-fold: first, to test the Global Leadership Competencies Model developed by Chin, Gu, and Tubbs (2001), and second, to identify global leadership competencies. The model consists of a pyramidal hierarchy that represents developmental phases analogous to Maslow’s need hierarchy. The phases are (1) ignorance, (2) awareness, (3) understanding, (4) appreciation, (5) acceptance/internalization, and (6) transformation, as leaders mature as a result of their international experiences. For this qualitative study, 26 interviews were conducted with international leaders from several countries whose average international expatriate experience was 48 months. The results obtained demonstrated that the model was predictive. The results also indicate that leaders consider the following to be some of the most important global leadership competencies: (1) communication skills, (2) motivation to learn, (3) flexibility, (4) open-mindedness, (5) respect for others, and (6) sensitivity. For the full text, see: Cristina Bueno, Global Leadership Competencies (GLC) Model, MBA Thesis, Eastern Michigan University, Ypsilanti, Michigan, 2003. Research in leadership development has recently turned toward identifying leadership competencies (knowledge, skills, abilities and behaviors): Charan, Drotter and Noel (2001); Fulmer and Goldsmith (2001); Goleman, Boyatzis and McKee (2002); Tubbs and Moss (2003); Tubbs (2004); Vicere and Fulmer (1997). The logic is that once the competencies can be identified, the leadership development process can more effectively focus on improving the deficiencies identified in each individual. It is also known that all leadership occurs in some context. 
The word competency comes from a Latin word meaning “suitable.” An individual’s competency refers to the ability to respond to the demands placed on the individual by the environment. The most important leadership competencies are those that can best transfer across cultures, both within organizations and from one country to another (Acuff, 1997; Deal and Kennedy, 1982; McGee, 2003; Rogers, 1995; Trompenaars and Hampden-Turner, 1998). The purpose of this study was to further investigate the Global Leadership Competencies Model developed by Chin, Gu and Tubbs (2001) and thereby to advance the research and development on this topic: to study different leadership styles and to identify the competencies required for global effectiveness. Leaders must improve from deficiency levels to competency levels in order to succeed in conducting international business (see also Rosen et al., 2000, and Hampden-Turner et al., 2000).
A Conceptual Model for Operations-Analytics Convergence
Dr. Joseph O. Chan, Roosevelt University, Schaumburg, IL
As businesses try to differentiate themselves in a competitive market, leveraging business intelligence to enhance business operations has become a top priority in business strategies. In spite of huge investments in technology, companies are often not reaping the anticipated benefits. The disconnect between analytics and operations prevents the effective execution of business strategies. A critical success factor in gaining competitive advantage is the ability to apply the right analytics at the right time, to the right people, at the right place and under the right situation. This paper proposes a conceptual framework for the convergence of operations and analytics through the construct of operational-analytic classes for an enterprise. It further describes an enterprise model for operations-analytics convergence. Applications of the model to customer relationship management and supply chain management are explored. The capability to adapt and respond to changes quickly is a necessary condition for businesses to compete in the global economy. Making the right decision at the right time has taken on special meaning in a world driven by real-time processing of information, where the right time is typically now. The traditional planning-execution cycle is getting shorter, and the line between analytics and operations is becoming blurred. Sub-optimal performance may result from not being able to apply the right analytical results in real-time operational situations. In today’s crowded market, traditional strategies based on operational efficiency, product excellence and price may not by themselves provide the necessary competitive advantage. Businesses are extending their value chains to include their customers, suppliers and alliance partners. The ability to coordinate business intelligence and operations across the extended enterprise in real time is critical for businesses to be adaptive and responsive to changes quickly and effectively. 
While business intelligence (BI) technologies are adopted to varying degrees by most companies, many fail to reap the anticipated benefit of improved operations. Critical issues include the inability to integrate disparate systems and the inability to integrate analytics with operations. A company equipped with the most sophisticated and expensive business intelligence technology can still be rendered operationally ineffective if the analytics are not applied at the right time, to the right people, at the right place and under the right situation. Application software vendors are responding to the demand for analytics to support real-time operations by incorporating analytics into their application suites: a marketing application has its suite of marketing analytics, a customer application has its suite of customer analytics, and so on. However, an enterprise operation requires the correlation of analytics from different application areas. This paper examines the need for analytics in the improvement of business operations and proposes a conceptual framework for the convergence of operations and analytics at the enterprise level by defining operational-analytic classes for an enterprise. The operational-analytic classes facilitate better business operations by identifying which analytic processes to deploy for the respective operational processes. An enterprise model is described for operations-analytics convergence. Applications of the convergence model in customer relationship management and supply chain management are explored, and related trends in technology are examined. Business analytics and operations nourish each other in a symbiotic relationship: analytics creates the business intelligence for better operations, and conversely, data captured in operations provide the inputs for analytic processing. The capability to leverage analytics in operations can be a critical differentiator for a business to stay competitive. 
In the following, we examine how analytics can improve business operations in the critical areas of customer relationship management, marketing and sales, and supply chain management.
An Analysis of the Incentives to Licensing in U.S. Information Technology
Dr. YoungJun Kim, The George Washington University, Washington D.C.
This paper investigates the validity of the potential factors that might affect the incentives of companies to license out their technology. Empirical analysis is provided with the help of a panel data set of observed licensing transactions worldwide involving information technology (IT) companies publicly traded in the United States. Our results show that transaction cost, market competition, and knowledge appropriability considerations weigh heavily in explaining licensing behavior. The important explanatory factors relate to the firm’s prior involvement in technology licensing, industry concentration, sales growth and the propensity to receive patents in the primary industry of the company. A company’s stock of technological knowledge (patents), company size and R&D intensity also play a key role in determining managers’ licensing incentives. There is anecdotal evidence that the market for technology is less developed than socially desirable and does not function well. For example, a study by the British Technology Group found that large companies in the United States, Western Europe, and Japan ignore a large share of their patented technologies, which could be licensed or profitably sold (British Technology Group, 1998). The inefficiency of the market for technology is caused by a number of impediments. The best-known obstacle to an efficient market for technology is the “appropriability problem”: in his early paper, Arrow (1962) argues that once an idea is disclosed to a potential buyer, it is possible for that buyer to use the information without paying for it. Because of this concern, a potential licensor would be reluctant to disclose the core of the technology, depriving a potential licensee of the chance to evaluate it. Without being able to evaluate the technology, buyers would be unwilling to buy it. This leads to a typical “market failure”. 
Nelson and Winter (1982) point out that innovation is largely the outcome of organizational routines, and hence is more effectively performed within organizations. “Cognitive” limitations in the transfer of technology to another context require extensive adaptations and impose costs (Arora and Gambardella, 1994). Additional difficulty arises in subdividing a given problem-solving task into subtasks: it can be difficult to partition the innovation process into independent tasks (Kline and Rosenberg, 1986; Von Hippel, 1990). There can also be problems in exchanging tacit knowledge and know-how through arm’s-length contracts, such as moral hazard and asymmetric information between licensor and licensee (Caves, 1996; Hart, 1995; Menard, 1996). In spite of these impediments, however, there is also extensive evidence of the increasing use of licensing in technology-intensive industries. For instance, a study by Arora, Fosfuri and Gambardella (2001) shows that technology licensing transactions with a total value of over $320 billion, an average of nearly $25 billion per year, occurred worldwide in the period 1985-1997. Thomson Financial’s SDC database, used in this paper, lists more than 10,000 publicly announced licensing agreements during the 1990s.
Global Terrorism: Past, Present & Future
Dr. Jack N. Kondrasuk, Dan Bailey, and Matt Sheeks, University of Portland, Portland, OR
Terrorism has a deep history, with instances of terrorist-type activities recorded as far back as the Bible; it presently involves many of the world’s countries, with every continent but Antarctica recording one or more terrorist attacks in the last year; and it has many facets to examine. Those facets include the perpetrators and their goals, the targets, the weapons, the events, and the effects of those events. Presently, the United States is the top target in the world and is likely to remain so for the foreseeable future. Al Qaeda is probably the top terrorist organization aiming at the U.S., with intentions of using more lethal weapons and producing severe damage. The future will probably see a significant reduction in ideological terrorism and an increase in single-issue terrorist groups attacking major cities with weapons of mass destruction. To better prevent and respond to terrorism in the world, it is necessary to understand its origins. For the U.S., September 11, 2001, when the Twin Towers in New York were destroyed by terrorists and over 3,000 people were killed, marked a rude awakening to terrorism. In Spain, the bombings in Madrid significantly changed political policies. Australians found how terrorism could impact them from the bombing of a tourist restaurant in Bali. Russians were reminded of terrorism in their midst when hundreds were killed at a school in Beslan. Israel was reminded of terrorism every time a suicide bomber blew up a restaurant, while Palestinians thought of terrorism when one of their leaders was assassinated. People throughout the world need to understand terrorism. The purpose of this paper is to enable us to understand terrorism so as to be better able to deal with it. To this end, this paper looks at the origins of terrorism in the world, how they have led to present-day terrorism, and, very importantly, what we can expect regarding terrorism in the future. 
Before we can answer questions about origin, prevalence and the future, we must define our topic: “terrorism.” However, the term is difficult to define. Different people and different countries have different views of the same behaviors. A “terrorist” to one is an insurgent, guerilla, militant, rebel, revolutionary, freedom fighter, warrior, soldier, or hero to another. At the core of terrorism is uncertainty, so it seems only fitting that there is also ambiguity and uncertainty regarding a definition of terrorism itself. The U.S. Department of State, while conceding that there is no universally accepted definition of “terrorism,” uses the definition of “premeditated, politically motivated violence perpetrated against non-combatant targets by sub-national groups or clandestine agents, usually intended to influence an audience” (U.S. Department of State, 2004a, p. 1). The Department of State goes on to describe foreign, as opposed to only intra-nation, terrorist groups, and lists 40 Foreign Terrorist Organizations (FTOs). A recent European Union definition considers “terrorism as violent crimes aimed at seriously destabilizing or destroying the fundamental political, constitutional, economic or social structures of a country or an international organization” (Treanor, 2004). Considering these different definitions, the main aspects of global “terrorism” seem to include 1) a non-military group of people united in a common political, religious, or ideological cause, 2) who operate in a clandestine way, 3) without a publicly known headquarters, 4) committing or threatening to commit acts of significant violence, 5) against those who oppose their views, 6) who are mainly civilians.
The Impact of Internal Control over E-commerce Activities on the Quality of Accounting Information in Banks Operating in Jordan
Dr. Jamal Adel Al-Sharairi, Al Al-Bayt University, Amman, Jordan
The study aims to identify the impact of security and protection, and of legislation and laws, on the quality of accounting information in the 22 banks operating in Jordan. A questionnaire was designed by the two researchers and distributed for this purpose to the internal auditors in the banks and to members of the non-executive committees emanating from the Board of Directors who have direct contact with internal audit in each bank. Of the 150 questionnaires distributed, 120 suitable questionnaires were recovered for analysis, a recovery rate of 80%. The questionnaire data were analyzed using SPSS and a number of statistical techniques, including descriptive statistics, arithmetic means, standard deviations and percentages; the study hypotheses were tested with multiple regression tests. The study found no significant impact of the combined independent variables (security and protection, legislation and laws) on the quality of accounting information, but there is a statistically significant impact of security and protection alone on the quality of accounting information. The study recommends that decision-makers in banks operating in Jordan raise the level of legislation and laws in order to positively affect the quality of accounting information in those banks. The need for accounting information is constantly growing, especially after the emergence of large, diversified companies, which increases the information burden on banks; the assembly, operation and processing of data must therefore produce the varied information that meets the needs of users outside and within economic units. This requires a highly efficient system of internal control over e-commerce activities, especially in commercial banks, as they serve as incubators for the corporate capital sectors, lead in the use of electronic commerce, and have taken on a great deal of such business. 
The accounting information system is one of the subsystems of the information system, producing data of a special nature. Applying the requirements of internal control to e-commerce activities in banks operating in Jordan may contribute to the output of accounting information of high quality and efficient retrieval, and may help meet management’s needs for information that is continuously correct, appropriate, timely and meaningful, providing significant help in the planning, implementation and control of the economic unit’s activities. The importance of the study lies in the impact of internal control requirements for e-commerce activities on the preparation and provision of accounting information with high-quality characteristics, enabling users of that information to make wise decisions, set policies and draw up future plans. This study is also the first to link internal control over e-commerce activities with the quality of accounting information. The problem of the study concerns the impact of internal control requirements for electronic commerce activities in the banks operating in Jordan on the output of high-quality, appropriate and timely accounting information that allows users to take advantage of that information in rational decision-making, planning and implementation.
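The multiple-regression test described in the abstract can be sketched in a few lines. The study itself analyzed survey data in SPSS; the variable names and the simulated Likert-scale responses below are hypothetical, serving only to illustrate how two independent variables are regressed on a quality measure and how the coefficients are read off:

```python
import numpy as np

def fit_multiple_regression(X, y):
    """Ordinary least squares for y = b0 + b1*x1 + b2*x2 + ...
    Returns the coefficient vector (intercept first) and R-squared."""
    X1 = np.column_stack([np.ones(len(y)), X])        # prepend intercept column
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)     # least-squares fit
    resid = y - X1 @ beta
    r2 = 1.0 - (resid @ resid) / (((y - y.mean()) ** 2).sum())
    return beta, r2

# Hypothetical data: mean Likert scores from 120 usable questionnaires.
rng = np.random.default_rng(seed=1)
security_protection = rng.uniform(1, 5, 120)          # independent variable 1
legislation_laws = rng.uniform(1, 5, 120)             # independent variable 2
# Simulated dependent variable: quality responds to security/protection only,
# mirroring the study's finding that legislation/laws had no significant effect.
info_quality = 1.0 + 0.6 * security_protection + rng.normal(0, 0.4, 120)

X = np.column_stack([security_protection, legislation_laws])
beta, r2 = fit_multiple_regression(X, info_quality)
```

With data simulated this way, the estimated coefficient on security and protection is close to 0.6 while the coefficient on legislation and laws is close to zero, the pattern the study reports.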
The Metaphor Matrix: Improving Metaphor Usage in Management Education
Dr. John P. Meyer, Iona College, New Rochelle, NY
Dr. Theodore Schwartz, Iona College, New Rochelle, NY
In this article, we review the use of metaphor in management education, explore the shifting boundaries between metaphorical and literal language, and suggest some improvements that can be made to the use of metaphors in the management classroom. To that end, we demonstrate how the use of any single metaphor is insufficient and potentially misleading. It is better to introduce students to a wide range of metaphors – juxtaposing their strengths and weaknesses in a metaphor matrix – in order to present a more accurate picture. While the use of metaphors in management education is both widespread and ultimately very beneficial for understanding the abstract elements of organizations and illuminating the practice of management, there are dangers and consequences related to their application and misapplication. Based upon three approaches to dealing with the dichotomy between metaphorical language and literal language in management education – a “black and white” approach, a “shades of gray” approach, and a newly proposed “metaphor drift” approach – we suggest that it is possible to understand some of the common hazards of organizational metaphors and make even better use of the ever-expanding range of vivid organizational images in our teaching through the use of a metaphor matrix. In teaching in the field of management, figurative organizational representations such as metaphors are inevitable, given the complexity of the subject matter (Weick, 1989). There are boundaries of experience that literal language cannot easily cross, but that metaphorical language can breach (Hatch, 1999: 96). Dramatic descriptions of organizations as living organisms (e.g., Cafferata, 1982; Morgan, 1981) that grow and change give life to the subject of management. Organizational members as travelers on heroic quests and journeys (e.g., Boyce, 1995; Clancy, 1989), complete with steering committees and important milestones, offer a sense of adventure to students of business. 
In practice, management metaphors can be used as guiding images of the future, as ways of increasing organizational effectiveness, as tools for organizational diagnosis, and as methods for simplifying the complexities of organizational life (cf. Cleary & Packard, 1992; Meyer, 2003; Palmer & Dunford, 1996). In the classroom, metaphors constitute an economical way of relaying primarily experiential information from the business world in a more vivid manner. In management theory, metaphors can provide significant insights about the mechanisms that produce observable phenomena (Tsoukas, 1991). Given these manifold uses, some have gone as far as to claim that metaphors are the missing link between theory and practice and as such are essential elements of the teaching process (McCourt, 1997; Tsoukas, 1991; Weick, 1989). Behind their buildings, products, and employees, organizations themselves do not have a corporeal presence. Therefore, metaphors derived from the perceptual or physical domain are helpful when conceptualizing them (cf. Hanby, 1999).
On Calling Convertible Securities – New Empirical Evidence
Dr. Camelia S. Rotaru, St. Edward’s University, Austin TX
This study shows that cash redemptions of convertible securities are associated with a positive abnormal market reaction on the day following the announcement. I attribute this positive reaction to the coupon savings resulting from redeeming the convertible security for cash. Results also show that companies do not force conversion of their securities in order to avoid financial distress. Rather, it appears that companies choose to force conversion even though the average amount of dividends paid on the common shares to be issued as a result of the conversion is higher than the amount of coupons saved through forced conversion. Moreover, results show no relationship between the abnormal return to a forced conversion announcement and the company’s probability of bankruptcy. The call provision allows the issuer to repurchase a bond at a specified call price before the maturity date. In most cases, the redemption price of a callable bond is initially set equal to the public offering price plus one year’s interest on the bond. However, the call price usually decreases over the life of the bond: the schedule of call prices typically scales the call premium to zero by a date up to one year prior to the maturity of the bonds. Previous studies examine the call feature of convertible securities. Some attempt to explain the role of call protection on convertible securities; others analyze the stock price reaction to announcements of redemptions of convertible securities. However, despite the fact that most convertibles are callable, the existing literature on the role of call protection in convertible securities is very limited. Some older papers try to determine whether call provisions have agency cost motivations, while others view convertibles as backdoor equity financing. Yet all of these studies ignore the fact that managers’ rationale for using call provisions is probably that it offers them a way to force conversion. 
Reduction of the agency costs of debt by shortening its effective maturity has been offered as a motivation for call features associated with straight debt (Myers (1977), Barnea et al. (1980)). Stein (1992) suggests an inverse relationship between call protection covenants on straight debt issues and the financial leverage of the issuer. For firms with high leverage, the agency costs of debt are magnified, calling for shorter effective debt maturity. Since firms with high leverage may be in weak bargaining positions, they may be required to offer stronger (longer) call protection. In a more recent study, however, Lewis et al. (1999) find financial leverage and the length of the call protection to be unrelated. The question that arises is whether the agency cost motivation for the use of call provisions applies to convertible securities. Research shows that the agency cost motivation of the call feature receives only limited support in the case of convertible securities. Kahan and Yermack (1998) suggest that the conversion feature alone provides the desired control for a firm suffering from high agency costs of debt.
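The call-price schedule described in this abstract (an initial call price equal to the offering price plus one year’s interest, with the premium scaled to zero by one year before maturity) can be illustrated with a short sketch. The linear decline and the parameter values are assumptions for the example, not data from the paper:

```python
def call_price_schedule(offer_price, coupon_rate, maturity_years):
    """Yearly call prices for a callable bond: the initial call price is the
    offering price plus one year's coupon, and the premium declines linearly
    to zero one year before maturity (assumes maturity_years > 2)."""
    initial_premium = offer_price * coupon_rate
    last_premium_year = maturity_years - 1   # premium is zero from this year on
    prices = []
    for year in range(1, maturity_years + 1):
        frac = max(0.0, (last_premium_year - year) / (last_premium_year - 1))
        prices.append(round(offer_price + initial_premium * frac, 2))
    return prices

# e.g., a 10-year bond issued at par (1000) with an 8% coupon:
schedule = call_price_schedule(1000.0, 0.08, 10)
```

For these assumed parameters the year-1 call price is 1,080 (offering price plus one year’s interest), and the schedule declines to par from one year before maturity onward.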
Momentum Strategies on Optioned and Non-Optioned Stocks
Dr. Keh-Yiing Chern, HedgeIQ Inc.
Dr. Susana Yu, Montclair State University
Dr. Kishore Tandon, Baruch College / CUNY
As implied by Mayhew and Mihov (2004), an ideal control sample of non-optioned stocks used for comparison with optioned stocks must account for the liquidity and volatility of those stocks. Following their approach and the study by Chan, Jegadeesh, and Lakonishok (1996), we study price and earnings momentum strategies on both optioned stocks and selected non-optioned stocks over 1983 to 2002. The purpose is to examine whether information issued by or related to non-optioned stocks has more information content than similar information regarding optioned stocks. We find that both price and earnings momentum strategies can be applied more successfully in the sample of non-optioned stocks than in the sample of optioned stocks. This better performance is associated with the wider spread of earnings estimate revisions between the winning portfolio and the losing portfolio, an indication of richer information content in the sample of non-optioned stocks than in the sample of optioned stocks. The evidence of return predictability constitutes a controversial aspect of the debate on market efficiency. Jegadeesh and Titman (1993) added a new twist to the literature by documenting that over an intermediate horizon of three to twelve months, past winners on average continue to outperform past losers, concluding that there is “momentum” in stock prices. Investment strategies that exploit such momentum, by buying past winning stocks and selling past losing stocks, predate the scientific evidence and have been implemented by many professional investors. One explanation for the success of momentum strategies is that the market responds only gradually to new information, so that previous information provides an ongoing signal about a firm’s prospects. 
Since optioned stocks tend to attract more attention from both analysts and investors seeking additional information, we expect momentum strategies using analysts’ estimate revisions to work differently in the two samples of stocks: optioned and non-optioned. This paper extends the existing literature by applying a fundamentally sound trading strategy to both optioned stocks and theoretically better-matched non-optioned stocks in order to better understand how the availability of options impacts the flow of information. The availability of stock options is used as a proxy for the richness of the informational environment. We hypothesize that momentum strategies work more favorably on the sample of non-optioned stocks than on the sample of optioned stocks. Financial analysts tend to revise their earnings forecasts gradually after firms’ quarterly earnings announcements, and this inertia in revising forecasts should be more evident for non-optioned stocks because of their lower analyst coverage. Overall, we find that both price and earnings momentum strategies can be applied more successfully to the sample of non-optioned stocks than to the sample of optioned stocks. This better performance is associated with the wider spread of earnings estimate revisions between the winning portfolio and the losing portfolio in the sample of non-optioned stocks, an indication of richer information content in the non-optioned sample than in the optioned sample.
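The basic winner/loser construction behind a price momentum strategy (rank stocks by formation-period return, go long the top group and short the bottom group) can be sketched as follows. The tickers, returns, and tercile grouping are hypothetical; the paper’s actual methodology follows Chan, Jegadeesh, and Lakonishok (1996) and is considerably richer than this minimal illustration:

```python
def momentum_portfolios(past_returns, n_groups=3):
    """Rank tickers by formation-period return and split into equal groups:
    the top group is the 'winners' portfolio, the bottom group the 'losers'."""
    ranked = sorted(past_returns, key=past_returns.get, reverse=True)
    size = len(ranked) // n_groups
    return ranked[:size], ranked[-size:]

def momentum_spread(past_returns, holding_returns, n_groups=3):
    """Holding-period return of the zero-cost strategy: long the winners,
    short the losers, both legs equal-weighted."""
    winners, losers = momentum_portfolios(past_returns, n_groups)
    long_leg = sum(holding_returns[t] for t in winners) / len(winners)
    short_leg = sum(holding_returns[t] for t in losers) / len(losers)
    return long_leg - short_leg
```

If momentum persists, the winner portfolio continues to outperform over the holding period and the spread is positive; comparing this spread across optioned and non-optioned samples is the kind of contrast the paper draws.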