BEST Scholarly Journals

____________________________________________________________________

Most Trusted.  Most Cited.  Most Read.

The Journal of American Academy of Business, Cambridge

Vol. 4 * Num. 1 & 2 * March 2004

ISSN: 1540-7780    *     The Library of Congress, Washington, DC

All submissions are subject to a double blind peer review process.

 


 

The primary goal of the journal is to provide business-related academicians and professionals from various fields around the globe with a single source in which to publish their work. The Journal of American Academy of Business, Cambridge brings together academicians and professionals from all business-related fields to interact with members inside and outside their own disciplines. The journal provides opportunities for researchers to publish their papers as well as to view the work of others. All submissions are subject to a two-person blind peer review process.

The Journal of American Academy of Business, Cambridge is published twice a year, in March and September. E-mail: jaabc1@aol.com; website: www.jaabc.com. Requests for subscriptions, back issues, and changes of address, as well as advertising, can be made via the e-mail address above. Manuscripts and other materials of an editorial nature should be directed to the Journal's e-mail address above. Address advertising inquiries to the Advertising Manager.

Copyright 2000-2016. All Rights Reserved

Urban Sprawl: Myth or Reality?

Dr. Tyler Yu, Mercer University, Atlanta, GA

Dr. Victoria Johnson, Mercer University, Atlanta, GA

Dr. Miranda Zhang, Mercer University, Atlanta, GA

 

ABSTRACT

Urban sprawl has become a symbol for a number of social and economic maladies plaguing the public and private sectors, as well as a target for those wishing to cure those contemporary ills.  Perspectives differ on whether it should be contained, encouraged or ignored.  This paper discusses differing viewpoints and consequences of the ancient and ubiquitous conflict between growth and development on the one hand and conservation and community on the other.  In the United States, suburbanization and its corollary, urban sprawl, have increased dramatically over the past twenty years.  To some observers, this transition has created a monster that threatens the very heart of the legendary American dream.  As the country began its transition from an agricultural, rural economy to an industrialized, urban one, it was confronted with the inevitable social and economic challenges that accompany concentrated urbanization.  Historically, the beliefs that growth is a social good, and that bigger is better, have been central tenets of the American perspective. Undoubtedly, growth brings with it a concomitant increase in jobs, a thriving economy, and a larger tax base.  Theoretically, it also creates a more equitable distribution of benefits and burdens.  Consequently, as they become more affluent, citizens migrate out of the central city to outlying suburbs in an attempt to improve their quality of life.  It is then assumed that the good life will necessarily follow; that is, the new environment will provide safety, clean air, privacy, pristine natural areas, and better educational systems.  However, in the waning years of the 20th century, this assumption was met with increasing skepticism.  And at the dawn of the 21st century, this skepticism has become a mainstream political concern, resulting in a frontal assault on these cherished beliefs. Empirical data, as well as anecdotal reports, describe too many cars for too few roads, increased pollution, failing schools, increased crime, and loss of the natural environment.  The entrenched belief in the paradigm of continuous growth and constant acquisition now competes vigorously with a new paradigm revolving around issues of life quality.  Current trends signal nostalgic demands for less populated, less automobile-dependent, multi-purpose residential communities, and the rise of a New Urbanism.  Citizens want convenience and privacy, in addition to a sense of community and a sense of place (Leinberger, 1998).  Urban sprawl has thus become a lightning rod for these concerns.  According to State Resource Strategies, a Washington consulting firm focusing on growth issues, nineteen states voted on more than 200 state and local initiatives to protect and preserve parks, open space, farmland, historic buildings and watersheds.  Nearly 70% of these initiatives were approved.  Depending on one's viewpoint, sprawl is a foe to be vanquished, a disease to be cured, or simply the price to be paid for continuing progress.  These myriad perspectives continue to be discussed regularly in various forums (Hart, 2003; Hairston, 2003). Despite the emergence of new perspectives and of powerful and knowledgeable champions for taming sprawl, a clear consensus on the nature and scope of urban sprawl nevertheless remains elusive.  
Gordon and Richardson (1998) maintain that the sprawl debate is distorted by a high degree of misinformation.  This misinformation has partially contributed to a deep chasm between those who believe that suburbanization is a spontaneous consequence of economic development and the anti-sprawl activists.   A clear consensus, therefore, has not emerged as part of the debate over urban sprawl or suburbanization.  This ambiguity presents a barrier to understanding and controlling the causes and consequences of the issue.  The definitional ambiguity is exacerbated by the prevalent ethic, which shapes societal expectations.  The belief that hard work, responsibility and duty will lead to prosperity is rooted deeply in the minds of American citizens.  Furthermore, this ethic is firmly supported with an abundance of empirical evidence. The principle of consumer sovereignty has been a critical component in the increase in America’s wealth and in the welfare of its citizens.   Better employment leads to higher salaries and more discretionary income.  Individuals become more acquisitive and expect a better lifestyle.  Individuals seeking a more bucolic and aesthetic setting leave the established central city, which is seen as crowded, dirty, noisy, unsafe, unhealthy and unresponsive.    When buying a house in the suburbs, citizens believe they are actually purchasing good public schools; relative safety; buffers from neighbors; access to recreational and shopping venues; and low taxes.   

 

Economic Growth In Transitional Versus Industrial Economies: A Case of the Baltic Sea Region

Dr. Tiiu Paas, University of Tartu, Tartu, Estonia

Dr. Egle Tafenau, University of Tartu, Tartu, Estonia

Dr. Nancy J. Scannell, University of Illinois at Springfield, Springfield, Illinois

 

ABSTRACT

This paper tests whether differences in growth factors exist between the transitional and industrialized countries of the Baltic Sea Region. Model estimations indicate that, for the transitional countries, the significant growth factors are in accordance with theory, unlike for the industrial countries. Economic growth determinants are apparently dissimilar across the two groups. It is expected, however, that in light of ongoing processes of European integration and economic development of the transitional countries, these discrepancies will diminish. Economic growth, albeit attributed with exacerbating income inequality and environmental harm, is one of the salient aspects of economic development and, ultimately, is assumed to promote the enhancement of societal welfare. In this paper, economic growth is understood to mean the growth of real gross domestic product (GDP) or, alternatively, the growth of per capita GDP. The Baltic Sea Region (BSR) has become one of the most competitive economic regions in Europe over the past decade. BSR countries are strategically situated vis-à-vis the leading world economies (World Competitiveness Yearbook, 2002). Due to its favorable location between the East and West and given the dynamics of interdependencies between its transition and integration phases, the BSR is poised for rapid economic growth. The literature underscores the BSR's uniqueness as derived from its concentration of business activity, its high attainment of economic development, civilization and prosperity, and its non-homogeneity (Kisiel-Lowczyc, 2000; Peschel, 1998).  The BSR is composed of two characteristically distinct sets of countries. Estonia, Latvia, Lithuania, Poland and Russia constitute the transitional nations, while Finland, Sweden, Denmark, Norway and Germany comprise the industrialized nations of the BSR. Inhabitants of industrialized countries command relatively strong purchasing power, and, thus, domestic demand prominently contributes to growth along with foreign demand. In contrast, given the relatively diminutive purchasing power of transitional countries' populations, economic growth in transition countries is driven more by foreign, rather than domestic, demand.  Economic linkages in terms of foreign trade among the countries of the BSR are notably strong, as evidenced in part by the substantial foreign trade activity extant within the region. For the transitional countries of the region, the inter-regional economic linkages are especially strong; in addition to foreign trade, they experience a large measure of foreign direct investment, primarily originating from the Nordic countries and Germany.  On the basis of their respective relationships with the European Union (EU), BSR countries can be disaggregated into three groups. To wit, all the transitional countries except for Russia have negotiated for membership in the EU, and all industrialized countries except for Norway have already secured membership in the EU. It follows that a third designation emerges consisting of the aforementioned exceptions, namely Norway and Russia, which are neither members of the EU nor currently advancing upon EU accession. The BSR countries are non-homogenous in terms of country size and relationship with the EU, as well as in terms of levels of both economic development and purchasing power parity. Four of the five transitional BSR countries, Estonia, Latvia, Lithuania and Poland, are candidate countries of the EU's eastward enlargement initiative. 
Poland's large size contributes to its formidable presence in this process. The Baltic States (Estonia, Latvia, and Lithuania) are the only former Soviet Union countries among the EU accession countries. Economic and political developments of the largest country within the region, Russia, are not as readily predictable. Among the industrialized BSR countries, Germany is the largest and wields the heaviest influence on the region's development. Furthermore, the industrialized economies of the BSR are among the 20 richest countries in the world, while the transitional economies lag behind. As seen in Table 1's figures for 1999, on the basis of purchasing power parity, the difference in national income per capita is approximately threefold, which is dwarfed by an approximately tenfold disparity when measured in current USD.  Figure 2 shows that the subsequent economic growth in transitional countries has been more rapid than that experienced in the industrialized countries, with the exception of Finland. (Finland responded to its initial decline in output by restructuring its economy and re-orienting itself towards western markets.) Among the transitional countries, Russia faced the most serious obstacles in restructuring its economy and, thus, positive economic growth rates for the country were not evident until 1999.
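As a rough illustration of the kind of comparison described above (not the authors' actual model or data), the sketch below regresses GDP growth on two candidate growth factors and interacts each with a transitional-country dummy, so that differences in the groups' coefficients can be tested directly; all variable names, data and coefficient values are hypothetical.

```python
# Illustrative sketch: do growth-factor coefficients differ between
# transitional and industrialized BSR countries? Interactions with a
# group dummy allow a direct test. Data below are synthetic.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200  # hypothetical country-year observations
df = pd.DataFrame({
    "investment": rng.uniform(15, 35, n),    # gross investment, % of GDP
    "openness":   rng.uniform(40, 160, n),   # (exports + imports) / GDP, %
    "transitional": rng.integers(0, 2, n),   # 1 = transitional BSR country
})
# Synthetic outcome: the factors matter more in the transitional group,
# mimicking the hypothesis of dissimilar growth determinants.
df["gdp_growth"] = (1.0
                    + 0.08 * df.investment + 0.02 * df.openness
                    + df.transitional * (0.10 * df.investment + 0.01 * df.openness)
                    + rng.normal(0, 1.5, n))

# Significant interaction terms indicate that the two groups' growth
# determinants differ.
model = smf.ols("gdp_growth ~ (investment + openness) * transitional", data=df).fit()
print(model.summary())
```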

 

A Note on the Impact Managerial Styles Can Have for Service Marketers: Some New Insights

Dr. Chaim Ehrman, Loyola University Chicago, Chicago, IL

 

ABSTRACT

In the field of list selection for service marketing, mail order houses typically offer prospective renters a systematic sampling of names per list to estimate expected return rates. Direct marketing is an alternative to e-service marketing since one can reach a wider audience, i.e., those consumers whose computers are not connected to the Internet or who simply do not use the Internet. These estimates are the basis for list selection. In this paper, it is shown through Bayesian analysis how this selection can be modified to accommodate the risk-prone, risk-averse and risk-neutral decision-maker.  There has been increasing interest in customer database marketing, both in business-to-business marketing and in consumer marketing (Kotler and Armstrong, 2001). Don Peppers and Martha Rogers (1993) contrast the typical mass marketing effort with one-to-one marketing. In the latter, there is much more information regarding the customer database, such as demographics, preferences and customer profiles.  In direct marketing, there is typically an initial communication with the consumer. Subsequently, the marketing message is tailor-made for a very narrow market segment, known as the target market. The goal is to receive a direct, measurable response from the consumer in this segment (Kotler and Armstrong, 2001).  Unfortunately, the primary focus of researchers has been on the needs of the customer, both as a consumer and as a business purchaser. However, there is another significant component in list selection that is overlooked in terms of deterministic decision style (Lee et al., 1981). It is known that management style can be risk prone, risk neutral or risk averse (Green, Tull and Albaum, 1988). Thus, the marketing managerial decision process that is used to determine which market segment should be targeted is a function both of what can be delivered in terms of want satisfaction and of the willingness of management to assume risk. Further, probabilistic issues can also be included, e.g., what is the probability that the risk-prone decision maker will remain risk prone for the purchase of a given list? A direct marketer may be very confident of a higher response rate for some products.  A more general question that should be addressed is: what are some motivations for managers to be risk prone or risk averse? Are these simply personality traits, or are they perhaps based on forecasts of economic growth, stagnation or recession?  In this paper, a real-life example with real response data has been selected to illustrate how decision-making styles as well as probabilistic issues can be incorporated in a direct marketing scenario for service marketing. Our focus will be on selling life insurance, but the approach can be readily modified to include other services. The paper is organized as follows.  In Section 1, the direct marketing process for this service is described in detail. Section 2 shows how one can accommodate different managerial styles of the decision maker, assuming a deterministic scenario. Section 3 incorporates a probabilistic model. Section 4 uses a Bayesian approach to incorporate both prior probabilities as well as conditional probabilities on decision styles of the decision maker.  In the field of selling insurance (life, health, disability, etc.), a two-stage procedure is not uncommon. Initially, a salesman would attempt (typically via phone call) to get an appointment with a prospective client. 
At the face-to-face meeting, the presentation for insurance is typically tailor-made to meet the unmet needs of the consumer. One strategy for selling is the programmed selling approach: Attention, Interest, Desire, Action (known as “AIDA”; Schewe and Smith, 1983). The key selling point for insurance may vary, depending on the consumer. For instance, with respect to life insurance, some are interested in the attribute of long-term financial security, others are concerned with providing for their loved ones in case of disaster, others want forced saving for a defined goal such as education expenses for their children, etc. Market segmentation would clearly be appropriate.  The initial phase is based on telemarketing, in which the salesperson attempts to get an appointment with the prospective buyer. Given the different demographics per list, a salesperson may want to test response rates for several lists before deciding which list should be selected.  This problem was presented to the Suss Life Insurance firm. They have access to three lists from which salesmen can select.  All salesmen were given the choice to use any combination of these lists, following the recommendation of Seybold and Marshak (1998), who claim that everyone in the company should have access to the complete customer picture.   There are three lists available.  The cold call list is essentially a list in which the salesman has had no prior contact with the potential client. The referrals list consists of clients who have been suggested by a customer of the Suss Company. Referrals create the most powerful form of advertising known to mankind: word-of-mouth. Over 50% of American business is based on this (verbal) ad form (Gitomer, 1998).  The social/fraternity list is a list of names in which there is familiarity with the Suss Company (members of the list go to the same church or synagogue as the Suss Company executives) but no direct referral has been suggested. All salesmen preferred the referral list. It generates the least amount of frustration for the salesmen, which is crucial for good sales (Bly, 1991).  Consider the following problem in list selection. Direct marketers typically ask a mailing house to select a systematic sample from several mailing lists to estimate the response rate from each list. The list with the highest response rate, which is a point estimate of the aggregate response rate, is selected for the target market.  The obvious problem affecting the decision maker is that a point estimate is used to determine the performance potential of response rates for each list.
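The sketch below illustrates, under stated assumptions rather than the Suss data, how a Bayesian treatment of this problem might look: each list's sampled response counts update a Beta prior, and an exponential (CARA) utility stands in for risk-averse, risk-neutral and risk-prone decision styles. The list names echo the paper, but the response counts, sale margin and mailing cost are invented for illustration.

```python
# Hypothetical sketch of Bayesian list selection under different risk attitudes.
import numpy as np

rng = np.random.default_rng(42)

# (responses, names sampled) from the systematic test mailing of each list;
# these counts are invented, not the Suss Insurance figures.
samples = {"cold_call": (12, 400), "referral": (9, 60), "social_fraternity": (45, 400)}

def posterior_draws(successes, n, alpha0=1.0, beta0=1.0, size=10_000):
    """Beta(1, 1) prior updated with the sample -> draws of the list's response rate."""
    return rng.beta(alpha0 + successes, beta0 + n - successes, size)

def expected_utility(rate_draws, risk):
    """Expected CARA utility of profit per 1,000 names mailed.
    risk > 0: averse, risk < 0: prone, risk == 0: neutral (expected profit)."""
    profit = 1000 * rate_draws * 60 - 1000 * 0.5   # assume $60 margin/sale, $0.50/piece
    if risk == 0:
        return profit.mean()
    a = risk / 1000.0                              # scale the risk coefficient to dollars
    return ((1 - np.exp(-a * profit)) / a).mean()

for style, risk in [("risk neutral", 0.0), ("risk averse", 2.0), ("risk prone", -2.0)]:
    best = max(samples, key=lambda k: expected_utility(posterior_draws(*samples[k]), risk))
    print(f"{style:>12}: choose the {best} list")
```

With these made-up numbers the referral list has the highest point estimate but also the widest posterior (small sample), so the risk-averse manager switches to the better-established social/fraternity list while the neutral and risk-prone managers stay with referrals, which is the kind of style-dependent choice the paper describes.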

 

Using the Technological Readiness Auditsm for Strategic Competitive Advantage

Dr. Robin Widgery, Eastern Michigan University, Ypsilanti, MI

Dr. Stewart L. Tubbs, Eastern Michigan University, Ypsilanti, MI

David Nicholson, Eastern Michigan University, Ypsilanti, MI

 

What is your company worth?  The human resources literature reminds us that the value of organizations consists of far more than those items on the balance sheet. How much value is represented in the abilities of your employees to work well as a team, to have good communication skills, to be well motivated, to know when to lead and when to follow?  Moreover, what is the value of highly skilled people in your company - people who know how to get the best results from the various technologies that are listed on the "hard" assets side of the ledger? Unlike assets that wear out over time, the value of these human resources, if managed wisely, can continue to grow year to year.  The continuous development of human resources should become a key element of every organization's strategic plan. Managed properly, these resources can become a powerful competitive advantage. While some organizations relegate training and development activities to secondary status, these activities should be viewed as a critical strategic function that contributes directly to the bottom line and has a quick and sure impact on the company's competitive potency.  A strategic human resource development perspective should include the following planning components: (1) a plan fully integrated with the organization's overall business plan and its vision statement, (2) an employee development plan designed to meet the unique needs of the current and future organization and its various functions, (3) a plan created to anticipate changes in technological applications within the organization, (4) a plan designed to meet the competitive demands of the organization, and (5) a plan targeted to meet the individual training needs within specific functions and specific locations.  To accomplish all these criteria, the Technological Readiness Auditsm was developed a few years ago for The Ford Motor Company's North American Operations.  It was created to pinpoint, within specific plants, the exact types of training needed by their engineering and managerial staff in order to prepare them to integrate, as quickly and smoothly as possible, several billion dollars of new processes and technologies.  The Audit was seen as a critical strategic activity that would enable Ford to make a quick and giant leap to a higher plateau of technological competitiveness.  They accomplished this by ensuring that on the day the new machines and methods were introduced, the personnel involved were already up to speed on how to get maximum performance from each new technology.  This method of preparedness has now been adapted for the needs of small and medium-sized business organizations.  It is a valuable tool for creating more effective personnel development strategies - strategies that anticipate the future and efficiently turn knowledge and skill liabilities into assets.  The Technological Readiness Auditsm is a method of diagnosing the training and development needs of employees in order to identify those competencies and skills that need to be strengthened to help assure greater effectiveness in current and future job performance. This assessment method identifies the exact types of training and development activities needed by all professional and support personnel within the organization. The Audit includes every skill, knowledge, method, process, and technology required throughout the workforce.  What does the Technological Readiness Auditsm actually do?  There are six key objectives, which are achieved during the Audit process.  
This process is primarily designed to identify all competencies used in the organization and then to define which parts of the organization have skill liabilities today and/or likely liabilities in the future.  The resulting analysis provides the company with a highly specific picture of human resource assets and liabilities.  Moreover, it provides an HR utilization profile for current conditions and an examination of future HR recruiting and training needs. The six objectives of the Audit include the following: (1) an inventory of current knowledge, skills, methods, processes and technologies needed by managerial, professional, engineering and technical staff; (2) a projection of the knowledge, skills, methods, processes and technologies needed for the future; (3) the identification of core competencies required for all job classifications; (4) the design of a targeted training curriculum and development strategy for each functional area audited; (5) assistance with the smooth and efficient integration of new and more competitive technologies to strengthen the company's position of readiness to compete with "cutting edge" competencies and skills; and (6) the identification of the critical recruiting needs of the organization and the profiling of the efficient utilization of human resources.  The Audit begins with the involvement of key individuals who are experienced in each of the functions selected for the assessment. 

 

Auditing E-Business: New Challenges for External Auditors

Dr. Ahmad A. Abu-Musa, Tanta University, Egypt

 

ABSTRACT

Electronic Business (E-Business) is a dynamic set of technologies, applications, and business processes that link companies, customers, and communities through the electronic exchange of goods, services, transactions and information. E-Business technology is rapidly changing the way that companies buy, sell, and deal with customers and partners. E-Business is becoming an important business tool since many companies are using the Internet to conduct their business. The dramatic evolution of information technology and the continuous decline in prices encourage many companies to automate their accounting information systems and to adopt E-Business in order to gain competitive advantages in the market. E-Business brings new challenges to conventional external auditors and the audit profession. External auditors need to understand how the advanced technology affects their audit process. Adequate planning of E-Business audit procedures becomes critical because most of the audit evidence might be available only in electronic form. External auditors should be able to evaluate the adequacy and accuracy of the electronic audit evidence. External auditors need to judge the validity, completeness, and integrity of accounting records; and the ability of the company to satisfy the going concern assumption. External auditors should also acquire the technical skills necessary to audit E-Business and maintain independence to enhance the profession’s credibility. They should also explore the possibilities and opportunities of using information technology and data analysis software. This paper examines the auditing process of E-Business as a new challenge to traditional external auditors and the audit profession. This paper also proposes some suggestions to help external auditors in facing these challenges effectively.  Electronic Business (E-Business) technologies are rapidly changing the way companies buy, sell, and service customers and collaborate with partners. E-Business involves all kinds of commercial activities performed across computer platforms and applications, including direct selling (e-tailing), customer relationship management, supply chain management, and the use of the Internet as the medium for conducting business transactions. The dramatic development of Internet technologies and the continuous decline of their prices have made E-Business applications more affordable, and encourage many companies of all sizes to implement them. These include not only business to business (B2B) operations but also business to consumer (B2C), business to government (B2G), and business to employee (B2E). Recognizing that various issues for auditors are emerging from these developments, the Auditing Practices Board published appropriate guidance in April 2001: Bulletin 3, E-Business: Identifying Financial Statement Risks (Billing, 2001; and Price, 2001).  E-Business brings new challenges to external auditors and the audit profession. External auditors need to understand how the advanced technology affects their audit process. Adequate planning of E-Business audit procedures becomes critical because most of the audit evidence might be available only in electronic form. External auditors should be able to evaluate the adequacy and accuracy of the electronic audit evidence. External auditors need to judge the validity, completeness, and integrity of accounting records; and the ability of the company to satisfy the going concern assumption. 
External auditors should also acquire the technical skills necessary to audit E-Business and maintain independence to enhance the profession's credibility. They should also explore the possibilities and opportunities of using information technology and data analysis software.  This paper implements the inductive approach to examine the auditing process of E-Business as a new challenge to traditional external auditors and the audit profession. This paper also proposes some suggestions to help external auditors in facing these challenges effectively. The paper consists of ten sections. The second section discusses E-Business opportunities and perceived risks; the third section introduces the changing role of external auditors in the information technology environment; the fourth section highlights the E-Business audit objectives; and the fifth section discusses the planning of E-Business audits and the collection and evaluation of audit evidence. Section six presents the necessary knowledge and technical training needed to audit E-Business, while section seven introduces the independence of external auditors; section eight deals with the evaluation of internal controls and evidential matters; section nine introduces the auditor's consideration of an entity's ability to continue as a going concern; and finally section ten concludes the paper.   The International Federation of Accountants (IFAC) (2002) argued that the use of the term E-Commerce has already been superseded by the term E-Business. E-Commerce can be described as the procurement and distribution of goods and services over the Internet using digital technology. The more encompassing term E-Business can be defined as one that includes all activities carried on by a business via the Internet. This definition of E-Business extends beyond the definition of e-commerce by encompassing a digital approach to the whole enterprise, including other parts of the IT system and other non-transactional activities, such as recruiting employees via the Internet (Figure 1). The Information Systems Audit and Control Association (ISACA) (2002) confirmed that the term e-commerce is used by different parties to mean different things.

 

Making "Good" Decisions: What Intuitive Physics Reveals About the Failure of Intuition

Dr. Jeff W. Trailer, California State University, Chico, CA

Dr. James F. Morgan, California State University, Chico, CA

 

ABSTRACT

This study examines the intuitive accuracy that people achieve in predicting the motion of objects (intuitive physics). Policy capturing was used to identify each subject's judgment method.  Then, individual decision policies were dissected mathematically, using the Lens Model equation, to determine the source of judgment errors.  The results support previous research that finds intuitive judgments to be generally inaccurate, but go further by diagnosing how intuition fails.  Should you trust your gut?  Bonabeau (2003) argues a cautionary “no”; however, many articles have proclaimed the virtues of intuition in managerial decision making (Mintzberg, 1976; Harper, 1988; Agor, 1989).  Reasons for focusing on intuition include: an increased need for visionary thinking, inspired leadership, and complex imaging (Butts, Whitty, & McDonald, 1991), a need to understand how experts make decisions quickly (Simon, 1987), and a need to reduce reliance on time-consuming systematic analysis (Behling & Eckel, 1991).  There is evidence that the role of intuition is being taken quite seriously, as many companies are paying to have their employees taught how to enhance their intuitive decision making (Agor, 1988; Block, 1990; Staff, 2002).  Thus, corporations are demonstrating a willingness to commit resources for the express purpose of enhancing intuitive decision making.   This makes the study of the efficacy of intuition relevant to the current needs of industry.  Classical economic theory assumes that decision makers follow a rational process (Simon, 1955).  However, other research has shown that cognitive biases often affect the decision process, resulting in inconsistent, non-optimal and faulty decisions (Edwards & Winterfeldt, 1986).  A significant reason why human decision making results in non-optimal decisions lies in the reliance on heuristic rules, or intuitive judgments, in lieu of optimal rules (Kahneman & Tversky, 1982).  In the real world, informational cues in the environment are often only probabilistically related to the criterion of the judgment task.  Accordingly, past experiments have researched the ability of human subjects to predict optimal outcomes of probabilistic events.  Thus, research in decision making under uncertainty focused on comparing intuitive judgments with expected outcomes determined via formal analytical probability models such as Bayes' Theorem (Hammond et al., 1980).  Although this type of analysis is important, the external validity of these studies is limited by the nature of uncertain events: probabilistic judgments must be measured against a generally accepted standard of rationality.  Unavoidably, the choice of any standard is subject to dispute by those who prefer a different standard (Hammond et al., 1986).  Conversely, there have been few studies that have documented the ability of human subjects to make accurate intuitive judgments of certain events.  A small field of research in this area does exist, however, in the study of intuitive physics (McCloskey et al., 1983; Levin et al., 1990; Krist, 2000; Sweeney and Sterman, 2000).  Here researchers investigate why people tend to hold an intuitive theory that is inconsistent with Newtonian mechanics.  The nomothetic methodologies used in such studies, however, do not allow the researcher to objectively partition each subject's judgment accuracy into consistency versus knowledge problems.  Thus, it is unknown which component tends to fail in applied physics problems.  
In this study “policy capturing,” an idiographic approach, is used in order to investigate why intuition often fails to produce accurate judgments.  The subjects for this research were undergraduate students enrolled in a business administration course.  The students completed the policy-capturing questionnaire as part of an in-class assignment.  Each student was instructed to complete the questionnaires and return them directly to the investigator, so that others never had access to the completed questionnaires.  Each participant was asked to complete the 10-page questionnaire, which included 50 different questions, within 1 hour.  The longest time any subject actually took was 35 minutes.  Thus, there was sufficient time available for all subjects to complete the questionnaire to their satisfaction.  Completion of the questionnaires was voluntary, and confidentiality was maintained by removing the cover sheet, which contained the student's name, from the questionnaire as it was turned in.  The cover sheets were retained by the course instructor as the basis to grant class credit for participation.  Seventy-seven students completed questionnaires.  Because of missing data, only 75 questionnaires were used in the analysis.  Because subjects are generally unable to accurately describe their own decision making methodology (Slovic & Lichtenstein, 1971), the best objective means of obtaining an individual's decision making method is to construct it by observing and recording the person's behavior over a range of specific, controlled situations (Hitt & Keats, 1984).  This "captures" the policy of the individual decision maker.  Thus, rather than examining the average behavior of groups of people making independent decisions, as in traditional between-subjects designs, policy capturing allows examination of the decision making behavior of individuals (Brehmer & Brehmer, 1988).  This policy capturing survey design completely crossed combinations of cue levels for the three independent variables, resulting in a total of 48 unique situations.  One repeated situation was added as a practice question for the subject, and it was not included in the analysis.  The 48 situations, the example question, and the practice question yielded a total of 50 situations listed on the questionnaire.  The situations were ordered randomly.
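As a rough sketch of how policy capturing and the Lens Model decomposition can be combined (illustrative only; the cue identities, criterion task and data below are hypothetical, not the study's instrument), the following code builds a fully crossed 4 x 4 x 3 design of 48 situations, simulates one subject's judgments, regresses both the correct answers and the judgments on the cues, and then splits achievement into knowledge (G) and consistency (R_s) components.

```python
# Illustrative policy-capturing / Lens Model sketch with simulated data.
import numpy as np

rng = np.random.default_rng(1)

# Fully crossed design: 4 x 4 x 3 = 48 situations, three physical cues.
speeds  = np.array([1.0, 2.0, 3.0, 4.0])    # launch speed, m/s
heights = np.array([0.5, 1.0, 1.5, 2.0])    # drop height, m
masses  = np.array([1.0, 2.0, 5.0])         # object mass, kg (irrelevant cue)
cues = np.array([[s, h, m] for s in speeds for h in heights for m in masses])

# "Environment": correct horizontal landing distance of a horizontally
# launched projectile; under Newtonian mechanics mass plays no role.
g = 9.81
criterion = cues[:, 0] * np.sqrt(2 * cues[:, 1] / g)

# Simulated subject: over-weights mass and answers somewhat inconsistently.
judgment = 0.8 * criterion + 0.15 * cues[:, 2] + rng.normal(0, 0.2, len(cues))

X = np.column_stack([np.ones(len(cues)), cues])

def linear_model(y):
    """Fit a linear judgment/environment model; return predictions and multiple R."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    pred = X @ beta
    return pred, np.corrcoef(y, pred)[0, 1]

pred_e, R_e = linear_model(criterion)   # environmental predictability
pred_s, R_s = linear_model(judgment)    # subject's consistency
G   = np.corrcoef(pred_e, pred_s)[0, 1]          # knowledge component
r_a = np.corrcoef(judgment, criterion)[0, 1]     # achievement
C   = (r_a - G * R_e * R_s) / np.sqrt((1 - R_e**2) * (1 - R_s**2))  # unmodeled component

print(f"achievement r_a={r_a:.2f}, knowledge G={G:.2f}, "
      f"consistency R_s={R_s:.2f}, predictability R_e={R_e:.2f}, C={C:.2f}")
```

In this framing, a low G with a high R_s points to a knowledge failure (a wrong intuitive theory applied consistently), while a high G with a low R_s points to an inconsistency failure, which is the distinction the study sets out to diagnose.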

 

Agency Theory, National Culture and Management Control Systems

Dr. Samson Ekanayake, Deakin University, Victoria, Australia

 

ABSTRACT

The management control system of an organization is the structured facet of management, the formal vehicle by which the management process is executed.  In most organizations, systems exist for planning, organizing, directing, controlling and motivating.  Depending on the level of appropriateness and quality of the management control systems, the task of management is either facilitated or hindered.  The end goal of a management control system is achieving organizational objectives.  Because employees (agents) do not always give their best efforts toward achieving organizational objectives, management control systems need to strive to align the goals of agents (e.g., employees, subordinates) with those of principals (e.g., senior management, owners).  Agency theory and its extension, the principal-agent model, provide insights into the problem of goal congruence and suggest remedies, at least in the Western cultural context.  Whether agency theory presumptions, predictions and prescriptions are universally applicable is an important issue in management. Their validity in different cultural contexts is largely unknown. The available literature to date indicates the possibility that agency theory may not be valid in non-Western cultures. However, further empirical research is needed in non-Western cultures to shed more light on this issue.  Agency theory provides theoretical underpinnings for many research efforts in the disciplines of economics, management, marketing, finance, accounting and information systems. It is one of the most influential theories underlying the bulk of corporate governance and management control research in the Western world.  Fundamental to agency theory is the assumption that agents are opportunistic and will always engage in self-serving behaviour if opportunities arise.  Accordingly, the role of control systems (e.g., structures, procedures, information systems, monitoring, performance evaluation, rewards, penalties) is to help principals curb opportunistic behaviour of agents by reducing opportunities and incentives for such behaviour.  This paper discusses the main characteristics of agency theory (and its extension, the principal-agent model) and identifies a number of Management Control System (MCS) design questions that may be examined using an agency theory perspective.  Based on the arguments of management scholars for a cultural difference in management style in Asia (e.g., Nanayakkara, 1992; Wijewardena and Wimalasiri, 1996), and based on the limited empirical research available (e.g., O'Connor and Ekanayake, 1997; Roth and O'Donnell, 1996; Sharp and Salter, 1997; Taylor, 1995), the paper also raises the possibility that agency theory assumptions may not be valid in Asia. The objective of this paper is to encourage further research into the applicability of agency theory for the study of management control issues of organisations in Asian societies.  Agency theory is concerned with the ‘agency problem’ that exists when there is an agency relationship. In an agency relationship one party (the principal) delegates decisions and/or work to another (the agent). The agency problem occurs because the agent has goals that are different from the principal's (Jensen and Meckling, 1976; Ross, 1973). The premise of agency theory is that agents are self-interested, risk-averse, rational actors, who always attempt to exert less effort (moral hazard) and to project higher capabilities and skills than they actually have (adverse selection). 
Agency theory attempts to resolve two problems relating to the agency problem. The first is the monitoring problem that arises because the principal cannot verify whether the agent has behaved appropriately.  The second is the problem of risk sharing (particularly in the case of outcome-based controls) that arises when the principal and the agent have different attitudes towards risk (Eisenhardt, 1989).  Agency theory is split into two camps (Eisenhardt, 1989; Jensen, 1983). The first camp (positivist research) has “focused on identifying situations in which the principal and agent are likely to have conflicting goals and then describing the governance mechanisms that limit the agent’s self-serving behavior” (Eisenhardt, 1989, p. 59).  Positivist agency research is almost exclusively concerned with the goal conflicts between owners (shareholders) and managers. Along the positivist line, Jensen and Meckling (1976) examined how equity ownership by management helps align the goals of managers with those of owners; Fama (1980) examined the role of capital and labour markets in controlling the behaviour of managers; and Fama and Jensen (1983) examined the role of the board of directors as a monitoring device.  The second camp (principal-agent research) believes in a general theory of the principal-agent relationship applicable to employer-employee, lawyer-client and buyer-supplier relationships.  According to Eisenhardt, the “positivist theory identifies various contract alternatives, and principal-agent theory indicates which contract is the most efficient under varying levels of outcome uncertainty, risk aversion, information and other variables” (1989, p. 60).  Agency theory has grown beyond its original positivist domain and has been used by (principal-agent) researchers in a number of disciplines to study issues that arise from agency-like relationships, for example, superior-subordinate relationships.  The widespread use of agency theory (both positivist and principal-agent) can be attributed to the appeal of the model due to its assumptions about people (e.g., self-interest, bounded rationality, risk aversion), organizations (e.g., goal conflict among members), and information (e.g., as a commodity and a monitoring device) (Eisenhardt, 1989).   Agency theory attempts to depict human behaviour in organizations.  Jensen (1983) describes agency theory as a powerful theory of organizations. 

 

Consumer Protection in E-Commerce: Analyzing the Statutes in Malaysia

Sarabdeen Jawahitha, Multimedia University, Malaysia

 

ABSTRACT

The growth of e-commerce, especially retail business on the Internet, has created opportunities for consumers, including Malaysian consumers, to transact online with comfort and convenience. Nevertheless, the ultimate success of e-commerce will depend to a great extent on the interest and confidence of consumers. Several pieces of legislation have been passed to protect consumers in the traditional marketplace, many of which can be useful in the electronic marketplace. However, the nature of the electronic environment requires new laws or amendment of the existing laws to address the new challenges posed by this new medium of transaction.  Issues such as the law applicable to consumer contracts for the supply of goods and services made via the Internet, the legality of collecting consumer data without express consent from the data subject, and the appropriate courts to decide an e-consumer dispute create anxiety among e-consumers. Many countries have taken initiatives to address these issues to build up consumer confidence. This paper is an attempt to examine the Malaysian legislative framework and to analyze its adequacy in addressing these anxieties and in preserving the interests of e-consumers.  Consumer protection has been a problem ever since the outset of trading 10,000 years ago. However, the explosive growth in cyberspace has led to some new problems and challenges for consumer protection (1). Cyberspace used to mean limited access to text-based e-mail and reference material for the few who had Internet access through universities and government agencies via online services such as CompuServe, America Online, and Prodigy. Today, however, direct access to the Internet, the World Wide Web in particular, plus access to the Internet through the online services, makes the online services small players in cyberspace, thus making it difficult to hold them liable for any violation of consumer protection laws.  The success of e-commerce to the benefit of both businesses and consumers is, in an important respect, dependent upon the adequacy of the laws governing consumer transactions. Legislation, in most of the countries of the world including Malaysia, was enacted as a response to imbalance in the marketplace in the 1960s, 70s, and 80s (2).  As such, the adequacy of this legislation to meet the basic needs of online consumers is questionable. This paper analyzes the existing Malaysian legislation protecting e-consumers and its suitability to the e-commerce environment.  “Consumer” means a customer, including a licensee, subscriber, or buyer of any goods or services, acting primarily in a personal, family or household capacity, other than for the purpose of resale. According to the Lectric Law Library's Lexicon on Consumer, a “consumer” is defined as an individual who purchases, uses, maintains and disposes of products and services. “Consumer” as defined in the Malaysian Consumer Protection Act 1999 means a person who acquires or uses goods or services of a kind originally acquired for personal, domestic or household purpose, use or consumption, and not for the purpose of trading or manufacturing consumption (3). The definitions are restrictive in nature; they exclude a person from the ambit of “consumer” if he has purchased goods for resale or for purposes other than those mentioned. The applicability of these definitions in the context of e-commerce is yet to be seen.  
“Consumer protection” has been around since the Middle Ages; the earliest forms of consumer protection were designed to discourage fraudulent trading practices and to protect consumers from danger (4). “Consumer protection”, as highlighted by the United Kingdom Office of Fair Trading for offline commercial activities, includes the basic legal rights that a consumer has when he buys or hires goods or services. Accordingly, there are three basic legal rights accorded to consumers: the goods and services must be of satisfactory quality; the goods and services must be fit for their purposes; and the goods and services must be as described (5).  Unfortunately, the Malaysian Consumer Protection Act 1999 does not offer any specific definition of “consumer protection”. However, the Act provides for protection against unscrupulous traders of the traditional marketplace. Misleading and deceptive conduct, false representation and unfair practices are outlawed under this legislation (5).   “E-commerce” refers to all forms of business transactions conducted over public and private computer networks. It is based on the electronic processing and transmission of data, text, sound and video. E-commerce includes transactions within the global information economy such as electronic trading of goods and services, online delivery of digital content, electronic funds transfer, electronic share trading, electronic bills of lading, commercial auctions, collaborative design, engineering and manufacturing, online sourcing, public procurement, direct consumer marketing and after-sales services. It includes both products (consumer goods, specialized medical equipment) and services (information services, financial and legal services, traditional services and new activities such as virtual malls).

 

Technical Efficiencies of Rice Farms in Thailand: A Non-Parametric Approach

Dr. Wirat Krasachat, King Mongkut’s Institute of Technology Ladkrabang, Bangkok, Thailand

 

ABSTRACT

The purpose of this study is to measure and investigate technical efficiency in rice farms in Thailand. This study decomposes technical efficiency into its pure technical and scale components. In past studies, efficiency analyses have involved econometric methods. In this study, the data envelopment analysis (DEA) approach and farm-level cross-sectional survey data of Thai rice farms in 1999 are used. A Tobit regression is used to explain the likelihood of changes in inefficiencies by farm-specific factors. The empirical findings indicate a wide diversity of efficiencies from farm to farm and also suggest that the diversity of natural resources has had an influence on technical efficiency in Thai rice farms.  Only a few decades ago, rice was not only the most important crop in Thai agriculture but also the backbone of the Thai economy. At present, despite considerable diversification into upland crops, rice continues to be the most important commodity in Thai agriculture. In 1999, about 50 per cent of farm land was planted to rice (Ministry of Agriculture and Cooperatives 2002). This is because rice is not only the staple food of Thai people but also a cash crop for the majority of Thai farmers.  In 1999, 10.51 million hectares were planted to rice, 24.17 million tonnes were produced and 6.84 million tonnes were exported. Thailand has four regions. Based on 2001/02 crop year data, the main output was contributed by the Northeastern Region, followed by the Central Region. The highest yields stemmed from the Central Region (Ministry of Agriculture and Cooperatives 2002).  There are at least four causes for concern regarding the future development of rice farms in Thailand. First, the relatively high growth rate of rice production in Thailand has been achieved mainly through the expansion of cultivated areas (Ministry of Agriculture and Cooperatives 2002). Second, although the growth rate of rice production has been notable, rice yields in Thailand have generally been rather low. Compared with some selected Asian rice-growing countries, the yield of rice in Thailand was the lowest in 2001 (Ministry of Agriculture and Cooperatives 2002). Third, the Thai government has significantly influenced Thai agriculture through a variety of policies over the past three decades. The most important policies in the agricultural economy were export taxes on agricultural products, especially rice, and quotas and tariffs on machinery and fertiliser imports. These could cause imperfect competition in those input and output markets. Finally, Thailand has experienced little agricultural research. Siamwalla, Setboonsarng and Patamasiriwat (1991) indicated that this may possibly be due to an over-abundant supply of agricultural products in Thailand. This suggests that there may have been little impetus for the government to invest in agricultural research to attain food self-sufficiency, unlike in India and Indonesia. Because of the above factors, economists and policy makers have raised the question of the technical efficiency of rice production in Thailand, especially at the farm level. The main purpose of this study is to measure and investigate factors affecting technical efficiency (decomposed into its pure technical and scale components) of rice production at the farm level in Thailand. To estimate efficiency scores, the DEA method is applied to farm-level cross-sectional survey data of rice farms in three provinces of the Northeastern Region in Thailand. 
Previous studies have investigated technical efficiency and its components at both the farm and aggregate levels in Thai agriculture (e.g., Chayaputi 1993; Krasachat 2000, 2001). However, to my knowledge, this study is the first application of DEA to measure and explain technical efficiency and its components for rice production at the farm level in Thai agriculture. This enables a more detailed understanding of the nature of technical efficiency in rice farms in Thailand.  This paper is organised into five sections. Following this introduction, the analytical framework is described. Next, the data and their sources are described. The last two sections cover the empirical findings of this study, and conclusions and suggestions for further research.  Coelli (1995), among many others, indicated that the DEA approach has two main advantages in estimating efficiency scores. First, it does not require the assumption of a functional form to specify the relationship between inputs and outputs. This implies that one can avoid unnecessary restrictions about functional form that can affect the analysis and distort efficiency measures, as mentioned in Fraser and Cordina (1999). Second, it does not require a distributional assumption for the inefficiency term.
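To make the DEA calculation concrete, the sketch below solves an input-oriented, constant-returns-to-scale efficiency score as one small linear programme per farm; this is a generic illustration under stated assumptions, not the paper's code, and the farm data and input names are invented.

```python
# Minimal DEA sketch: input-oriented, constant-returns technical efficiency.
import numpy as np
from scipy.optimize import linprog

# rows = farms; inputs: land (ha), labour (days), fertiliser (kg); output: paddy (t)
X = np.array([[2.0, 120,  80],
              [3.5, 200, 150],
              [1.5,  90,  60],
              [4.0, 260, 300]], dtype=float)
Y = np.array([[6.0],
              [9.5],
              [5.5],
              [8.0]], dtype=float)
n, m = X.shape          # number of farms, number of inputs
s = Y.shape[1]          # number of outputs

def crs_efficiency(o):
    """theta for farm o: min theta s.t. Y'lam >= Y[o], X'lam <= theta*X[o], lam >= 0."""
    c = np.r_[1.0, np.zeros(n)]                       # minimise theta
    A_out = np.c_[np.zeros((s, 1)), -Y.T]             # -Y'lam <= -Y[o]
    b_out = -Y[o]
    A_in = np.c_[-X[o].reshape(-1, 1), X.T]           # X'lam - theta*X[o] <= 0
    b_in = np.zeros(m)
    res = linprog(c, A_ub=np.vstack([A_out, A_in]), b_ub=np.r_[b_out, b_in],
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.x[0]

for o in range(n):
    print(f"farm {o + 1}: technical efficiency = {crs_efficiency(o):.3f}")
```

Adding the convexity constraint sum(lambda) = 1 gives the variable-returns score, and the ratio of the constant-returns score to the variable-returns score isolates the scale component referred to in the abstract; the resulting efficiency scores can then serve as the dependent variable in the Tobit regression on farm-specific factors.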

 

The Moderating Effects of Consumer Perception to the Impacts of Country-of-Design on Perceived Quality

Ting-Yu Chueh, National Central University, Taiwan

Danny T. Kao, National Central University, Taiwan

 

ABSTRACT

This research is exploratory in nature.  The object involved in this research is the mobile phone, due to its prevalent usage. The main purpose of this research is to identify the moderators affecting the COD effects on perceived quality (PQ). According to the related literature, the moderators affecting perceived quality are country image, value perception, risk, trust, attitude toward the brand, satisfaction, familiarity, attachment and involvement. The moderators involved in this research emanate from consumer perceptions, rather than from behavioral aspects. Based upon the literature and inference, all related moderators are expected to have a significant impact on PQ. We suggest that representative samples should be selected and tested in future research.  The propositions involved in this research might be applicable to other products. Moreover, findings of this research can be applied in practical fields to facilitate the distribution of marketing resources.  The trend of globalization since the 1980s has brought unprecedented impacts on the marketing of consumer goods, and international marketing research thus takes on greater importance.  Among the themes of international marketing, the country-of-origin (COO) effect has become a critical issue in consumer decision-making, due to the fact that consumer perception and judgment of identical products may vary depending upon which country the products are made in.  As the term itself suggests, “perceived quality” is a subjective judgment; sometimes it cannot even be understood by scientific means.  However, it will definitely affect consumer buying behavior.  Therefore, perceived quality has been a popular research topic over the past decades and many studies have contributed to this area.  The importance of perceived quality is further enhanced by Aaker (1991), who categorized perceived quality as one of the key sub-dimensions of brand equity.  Despite the critical roles they play in both academic and practical fields, COO and perceived quality have not yet received the collective cross-referencing they deserve.  Therefore, we present a conceptual framework and some propositions to interpret their relationship.  The purpose of this research is to identify whether a relationship exists between COD and perceived quality, as well as whether any moderators affect the COD effects on perceived quality from the perspective of consumers.  In addition to the country that manufactures the product, the country that designs the product will also affect consumer judgments toward the product or the brand. A number of countries have been noted for their excellence in designing some specific product categories and therefore enjoy positive images.  Some examples are as follows: Porsche (sports cars) ----- Italy; Swatch (wristwatches) ----- Switzerland; Levi's (jeans) ----- U.S.A.; Benz (luxury cars) ----- Germany; Chanel (perfume) ----- France; Sony (home appliances) ----- Japan.  Keller (1998) suggests that, “choosing brands with strong national ties may reflect a deliberate decision to maximize product utility and communicate self-image based on what consumers believe about products from those countries.”  Accordingly, the COD effects will obviously influence consumer perception of product quality.  However, some factors in between act as moderators that increase or decrease the magnitude of the impact.  
Cobb-Walgren, Ruble and Donthu (1995) divided the consumer buying decision into two processes: consumer perception and consumer behavior.  In this research, we focus on the dimension of consumer perception.  Much research has been devoted to the emerging field of COO.  One of the pioneering articles regarding country-of-origin was by Schooler (1965).  As to the earlier studies, however, the most frequently criticized defects of COO studies are that they involved only single cues of product quality rating, which may result in misleading conclusions (Johansson, Douglas and Nonaka, 1985), and that they ignored the relative importance of other relevant cues in affecting consumer evaluation of products (Han and Terpstra, 1988).  Hence, those studies are insufficient to understand the overall impact of “made-in” effects (Hong and Wyer, 1989; Howard, 1989).  On the contrary, later studies utilizing multiple cues indicated that COO has less impact on consumers' perception (Ettenson, Gaeth and Wagner, 1988), which is inconsistent with the conclusion inferred from the single-cue studies.  In addition, Pisharodi and Parameswaran (1992) have addressed some weaknesses in previous COO studies, such as poor handling of a complex construct, single-cue studies, and lack of methodological rigor.  As mentioned above, the COO construct can be decomposed into four components: country-of-design (COD), country-of-parts (COP), country-of-assembly (COA) (Insch and McBride, 1998), and country-of-manufacture (COM).  In this research, we choose only country-of-design as the independent variable, while perceived quality acts as the dependent variable.  The reason we chose to examine the COD effects mainly lies in the insufficiency of substantive research on COD, compared with the plentiful academic articles on COM.  According to the constructs defined by Mohamad et al. (2000) and Li et al. (2000), COD can be operationalized into five scales: appearance, style, color, variety, and aesthetics.  Among them, “color” is modified as “color assortment” to better fit our research.
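As a hedged illustration of how one of the proposed moderators might eventually be tested (this is not the authors' procedure; they state only propositions), the sketch below interacts a COD-image rating with a single perception-based moderator, brand familiarity, in a regression on perceived quality; all variables and data are synthetic.

```python
# Illustrative moderation sketch: does familiarity change the strength of the
# COD effect on perceived quality (PQ)? Variables and data are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 300
df = pd.DataFrame({
    "cod_image":   rng.uniform(1, 7, n),   # favourability of the design country
    "familiarity": rng.uniform(1, 7, n),   # candidate perception-based moderator
})
# Synthetic PQ in which the COD effect grows with familiarity.
df["pq"] = (2 + 0.3 * df.cod_image + 0.2 * df.familiarity
            + 0.15 * df.cod_image * df.familiarity + rng.normal(0, 0.8, n))

# A significant interaction term is evidence of moderation.
fit = smf.ols("pq ~ cod_image * familiarity", data=df).fit()
print(fit.summary().tables[1])
```

The same pattern, one interaction term per candidate moderator, could be repeated for country image, value perception, risk, trust, attitude toward the brand, satisfaction, attachment and involvement.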

 

Ownership and International Joint Ventures’ Level of Expatriate Managers

Lifeng Geng, Lakehead University, Ontario, Canada

 

ABSTRACT

We examine the relationship between ownership and the level of expatriate managers in international joint ventures. We propose that such a relationship rests on the division between de jure control and de facto control. Results from empirical tests support our hypothesized inverted-U relationship between ownership and IJVs' level of expatriate managers: IJVs' level of expatriate managers rises as ownership increases from low to a majority level and then decreases after ownership exceeds the majority level. Since the 1970s, foreign subsidiaries that emerged from foreign direct investment (FDI) activities have played an increasingly important role in world economic development. By 2001, global FDI activities had created 850,000 foreign subsidiaries. These foreign subsidiaries employed approximately 54 million people across the globe, and their sales of almost $19 trillion were more than twice as high as world exports. They now account for more than one-tenth of world GDP and one-third of world exports (UNCTAD, 2002). One problem investing firms frequently confront is how to control these foreign subsidiaries. In fact, control has been the focus of the FDI literature and is the single most important determinant of both risk and return (Anderson & Gatignon, 1986). Control is an essential component of the managerial functions responsible for ensuring that investing firms' goals and interests are met and that deviations from standards are corrected for effective performance outcomes (Fenwick et al., 1999). Control problems exist with all foreign subsidiaries, but most notably in international joint ventures (IJVs), which represent a governance mode for international transactions located between the polar opposites of arm's-length deals and transactions conducted within firms (Hennart, 1988). IJV control seems indispensable for a successful, cooperative IJV relationship. Cooperation is a necessary complement that overcomes the limits of IJV control and nourishes continuity and flexibility when changes and conflicts arise (Luo, 2002); IJV control, in turn, establishes the institutional framework and organizational setting that guide, nurture, and strengthen the course of cooperation. Ownership and the use of expatriate managers are two important dimensions of IJV control. Ownership is a type of de jure control that grants investing firms the legitimate right to exert de facto control and to participate in IJVs' strategic and operative decisions. The use of expatriate managers provides investing firms an effective mechanism to exercise de facto control over IJVs. When investing firms enter a foreign market in the form of IJVs, they may need to staff IJVs' top management positions with a certain level of expatriate managers in order to exercise effective control over the IJVs. Thus, a study of the relationship between ownership and IJVs' level of expatriate managers may contribute to a better understanding of IJV control. However, in contrast to a plethora of research on IJV control and an expansive literature dealing with the selection, training, compensation, and cross-cultural adjustment of expatriate managers, there are surprisingly few studies investigating the relationship between ownership and IJVs' level of expatriate managers. Indeed, a review of the extant literature reveals a lack of sound theoretical foundation and scant empirical evidence. The nature and strength of the relationship between ownership and IJVs' level of expatriate managers have yet to be established and tested. 
This study addresses this issue by introducing a theoretical framework based on the concepts of de jure control and de facto control. Our analysis of 235 IJVs in Japan provides support for this framework. IJVs are a form of international inter-firm alliance created to govern cooperative efforts in the pursuit of partner firms' complementary assets. These assets are typically knowledge-based proprietary assets, or intangible assets. They invariably have a tacit nature in that the creation and replication of these assets rely heavily on learning by doing (Polanyi, 1967). They reside in the people who operate them and are difficult to articulate. This tacit nature renders the exchange of these assets problematic because of contractual hazards, particularly those associated with asset specificity, resource interdependency, difficult performance measurement, and uncertainty (Williamson, 1996: 9). The exchange is especially difficult in the international arena, since countries differ significantly in the legal systems that protect intellectual property (Mansfield, 2000). Consequently, investing firms have to align control structures with the characteristics of transactions in order to mitigate contractual hazards (Williamson, 1996: 3). IJV control enables investing firms to participate in IJVs' decision-making and to access information flows within IJVs. Through these mechanisms, IJV control facilitates superior monitoring of IJVs' activities, attenuates the leeway for opportunism, prohibits contractual hazards, and protects investing firms' intangible assets (Oxley, 1997). Many argue, however, that IJVs are a type of voluntary cooperative relationship and that cooperation is "theoretically deemed central to the IJV relationship" (Parkhe, 1993; Madhok, 1995). Cooperation is a precondition for the formation of IJVs. The ultimate purpose behind the formation of IJVs is to provide partner firms an opportunity to work together in pursuit of mutual benefits. No firm is willing to work with a firm that has a tendency to cheat and to breach an agreement when it can benefit from doing so. Cooperation is a necessary complement that overcomes the limits of control and nourishes continuity and flexibility when changes and conflicts arise (Luo, 2002).
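The hypothesized inverted U is, in essence, a curvilinear specification. A minimal sketch of how such a relationship could be tested follows; the data file and the variable names (ownership share and the expatriate share of top management) are assumptions for illustration, not the authors' actual code.

    import pandas as pd
    import statsmodels.formula.api as smf

    ijv = pd.read_csv("ijv_sample.csv")  # hypothetical data set of IJV observations

    # A positive coefficient on ownership and a negative coefficient on its square
    # are consistent with the hypothesized inverted-U relationship.
    model = smf.ols("expat_level ~ ownership + I(ownership ** 2)", data=ijv).fit()
    print(model.summary())

    # Turning point of the quadratic, -b1/(2*b2); an inverted U implies it falls
    # inside the observed ownership range (e.g., around a majority stake).
    b1 = model.params["ownership"]
    b2 = model.params["I(ownership ** 2)"]
    print("turning point:", -b1 / (2 * b2))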

 

Teaching Workloads of Finance Program Leaders and Faculty and Criteria for Granting Load Relief

Dr. Ron Colley, University of West Georgia, Carrollton, GA

Dr. Ara Volkan, University of West Georgia, Carrollton, GA

 

ABSTRACT

This study first examines the distributions of teaching loads and number of course preparations of finance program leaders and faculty at Ph.D.-level and master's-level AACSBI-accredited institutions.  In addition, official maximum loads and the extent to which faculty and program leaders in given institutions teach different levels of loads are presented.  Second, the reasons for granting teaching workload reductions to finance faculty are examined.  Where appropriate, statistical tests are performed to report differences among the means of the results observed in the two categories, for both faculty and program leaders and public and private institutions. The overall results show that there are no statistically significant differences between the responses of program leaders and faculty or between the private and public institution outcomes.  While the former indicates good communication among finance educators, the latter shows that free market competition operates as an equalizing force.  When specified, the official maximum load is usually 24 semester hours (eight courses) per year, but few faculty teach the maximum load. Also, there are significant differences between the average common teaching loads and common number of course preparations in the two categories.  While there are some differences among the reasons cited for load relief in the two categories of programs analyzed, overall results indicate that publication activities are the main factors underlying load relief, followed by editing a journal and institutional service.  Thus, advocates of rewarding the scholarship of teaching and professional development activities at levels at least equal to research activities have not achieved their goal. Given increasing pressures for doing more with less and calls from some legislatures to mandate minimum teaching loads, finance program leaders and faculty need to be informed of how their teaching workloads compare to those at institutions with characteristics similar to theirs. The nation-wide results reported in this paper can be used for such comparisons, as evidence during discussions with university administrators, and when filling or changing jobs.  Traditionally, teaching workloads in institutions of higher education are determined based upon the types of degrees offered, accreditation status from the Association to Advance Collegiate Schools of Business - International (AACSBI), and governance characteristics (i.e., public or private).  Since public institutions are funded by state legislatures and usually operate under authoritative bodies such as boards of regents and chancellors, they have to follow guidelines for teaching activities that originate from these sources.  Consequently, it is usual to find an official (i.e., maximum) teaching load specified in most states.  However, few faculty members teach the official or maximum load.  Given expectations of research and professional service, most finance faculty members teach less than the specified maximum course load, resulting in the establishment of an average (i.e., common) teaching load that is less than the specified maximum.  Thus, in most finance units there are faculty members who may teach the maximum, the common, or above or below the common load due to above- or below-average expectations in research and service.  
The two primary objectives of this research are to determine: 1) common teaching loads in semester hours, the common number of course preparations, maximum teaching loads, and the extent to which faculty and program leaders in given finance programs teach different levels of loads; and 2) the reasons why faculty members teach different loads.  One motivation for this study is to provide finance faculty and program leaders with empirical evidence that can be used to support their requests and decisions concerning teaching workloads and load relief when they negotiate with administrators, change jobs, or fill positions.  Given increasing pressures for doing more with less and calls from some legislatures to mandate minimum teaching loads, finance faculty and program leaders need to have more than anecdotal evidence at their disposal and to be informed of how their teaching workloads compare to those at institutions with characteristics similar to theirs.  The nation-wide results reported in this paper can be used for such comparisons during discussions with administrators and when recruiting or interviewing for jobs.  Another motivation for this study is to determine whether the advocates of rewarding the scholarship of teaching and professional development and service activities at levels at least equal to research activities have achieved their goal.  Specific emphasis has been placed on revisions leading to rewarding the scholarship of teaching (i.e., innovative teaching, course and curriculum design, and education research) and professional development and service outcomes at levels at least equal to the publication of research outcomes.  The results of this study will show whether the efforts to change publication-driven faculty reward structures are having any impact.  Results of recent research studies on this subject indicate that university reward structures continue to be based mainly on publication activities. 
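As a minimal illustration of the kind of mean-difference test mentioned above (not the study's actual code), the sketch below compares average semester-hour loads reported by program leaders and by faculty; the file and column names are assumed.

    import pandas as pd
    from scipy import stats

    responses = pd.read_csv("workload_survey.csv")  # hypothetical columns: role, load_hours
    leaders = responses.loc[responses["role"] == "program_leader", "load_hours"]
    faculty = responses.loc[responses["role"] == "faculty", "load_hours"]

    # Welch's t-test (unequal variances) for the difference in mean teaching loads.
    t_stat, p_value = stats.ttest_ind(leaders, faculty, equal_var=False)
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")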

 

ICT Adoption and SME Growth in New Zealand

Dr. Stuart Locke, The University of Waikato, Hamilton, New Zealand

 

The impact of adopting information communication technology (ICT) upon the growth of small businesses in New Zealand is reported in this paper.  Government policy increasingly places a strong emphasis upon the importance of ICT and the knowledge economy.  Specific projects undertaken by government in this regard have included remote rural broadband access developments, the single government portal, and e-procurement initiatives.  An SME Quarterly Benchmarking Survey has been used to investigate various aspects of ICT adoption by SMEs since 1999, and this survey provided the research instrument for the present study.  Three measures of growth are used as the basis for investigating the impact that the level of ICT adoption has on a range of small businesses.  The analysis, using logit regression, probes the strength of the various factor relationships.  Increased profitability, as a proxy for growth, is most strongly correlated with ICT usage.  Although a positive relationship is considered likely, and is detectable in the data gathered, it could not be separated from other influences.  Factors such as the level of ICT understanding of the owner/managers and increases in the number of employees are similarly positively correlated.  Accordingly, it becomes increasingly difficult to sustain a causal link between increased ICT use and growth. The extent to which SMEs have clear growth objectives, in terms of specific business performance measures, appears to be very limited.  Finally, it is suggested that the results obtained in the study provide useful insights which have a direct bearing upon government policy being developed in agencies such as the Ministry of Economic Development.  New Zealand Government policy towards SMEs has, over the last three years, placed an increasing emphasis upon the knowledge economy and information communication technology (ICT) as key drivers for sustainable growth in this sector.  These emerging policies and associated programmes are indicative of a new approach when compared to the prior decade of significant instability in policy towards SMEs (Locke and Scrimgeour 2000).  While the core government agencies continue to be renamed, merged, unmerged and shuffled, there nevertheless appears to be a more consistent theme emerging.  The research findings reported in this paper concern the relationship between ICT and growth in small business.  In practical terms, the policy implication of this study is to establish whether it is plausible to encourage small business growth through the implementation and utilisation of ICT.  In New Zealand a very large percentage of business is small: approximately 45% of the employed workforce works in small businesses with fewer than 20 employees, and 97% of private sector enterprises are small businesses (Ministry of Economic Development, 2001).  An improvement in business performance is most likely to be experienced predominantly by those enterprises that have a specific growth objective (Westhead & Cowling, 1995).  Owner/managers with the objective of independence generally experience significantly lower profits with no influence on sales levels, whereas those wanting to achieve profitability do not necessarily experience an increase in profit levels (Westhead & Cowling, 1995). This suggests that to achieve a high performing small business sector, it is vital to encourage and support small business owner/managers to adopt a growth objective (Westall & Cowling, 1999). 
However, once such a goal is established, the specific factors that allow enterprises to achieve their growth objective are unclear (SEAANZ & CPA Australia, 2001).  Such uncertainty has prompted government and private sector initiatives examining the factors having a potential influence on business performance and growth.  In 2002 the Ministry of Economic Development published a report detailing the results of its Business Practices and Performance Survey (undertaken in the previous year), which identifies various factors that appear to be crucial for growth within a sample of 3,378 NZ businesses.  In particular, those that have comparably better practices in place and that have most effectively linked them to operational outcomes are found most likely to report a higher return on investment and growth in productivity, net cash flow, profitability and market share.  Lindsay et al. (2001) find that within a sample of 170 small NZ businesses, more than half have experienced growth.  With New Zealanders being enthusiastic early adopters of new technologies (Statistics NZ, 2002; Trade NZ, 2000), in 2001 NZ's expenditure on ICT as a percentage of GDP was the highest in the world at 10.2 percent (Barton, 2002).  This top ranking was maintained throughout the 1990s, with ICT spending making up 9 percent of GDP in 1992 and 8.7 percent in 1997, compared to 7.1 and 7.7 percent respectively in the USA (WITSA/IDC, 1998).  Given the relatively high use being made of ICT within NZ, it is likely that any relationship between ICT and business growth will be particularly pronounced within the NZ economy. 
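A minimal sketch of the kind of logit specification described above follows: a binary growth indicator regressed on ICT adoption and related factors. The file and variable names (grew, ict_use, ict_knowledge, d_employees) are assumptions for illustration rather than the survey's actual fields.

    import pandas as pd
    import statsmodels.formula.api as smf

    sme = pd.read_csv("sme_benchmark.csv")  # hypothetical extract of the benchmarking survey
    # grew: 1 if profitability increased; ict_use: adoption score;
    # ict_knowledge: owner/manager's self-rated ICT understanding;
    # d_employees: change in the number of employees.
    logit = smf.logit("grew ~ ict_use + ict_knowledge + d_employees", data=sme).fit()
    print(logit.summary())

Because ICT use, the owner's ICT understanding, and employee growth are themselves correlated, the coefficient on ict_use alone does not establish causality, which is the caveat the study raises.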

 

Internet Technology as a Tool in Customer Relationship Management

Noor Raihan Ab Hamid, Multimedia University, Cyberjaya, Malaysia

Dr. Norizan Kassim, United Arab Emirates University, UAE

 

ABSTRACT

The Internet's growth, particularly the World Wide Web, as an electronic medium of commerce has brought tremendous changes in market competition among various industries. For example, past research has examined the impact of Internet technology on customer relationship management in various areas: in small and large firms, and in services and business-to-business companies. However, there remains a need to empirically examine the impact of implementing Internet technology on various dimensions of relationship management in South East Asia, particularly Malaysia. Primarily, the interest was driven by traditional customer-management economics: it costs the industry five times as much to acquire a new customer as to retain an existing one (Peppers and Rogers 1996). The results of this study indicate that click-and-mortar companies show a higher percentage of using Internet technology for CRM compared to pure dotcom companies.  There is a positive impact of the utilization of Internet technology on the CRM variables being studied. These findings may reflect a similar situation in other countries in South East Asia where the penetration rate of e-commerce is relatively low. The limitations and future directions of this research are also discussed and highlighted.  The emergence of Internet technology, particularly the World Wide Web, as an electronic medium of commerce has brought tremendous changes in how companies compete.  Companies that do not take advantage of Internet technology are viewed as not delivering value-added services to their customers and are thus at a competitive disadvantage. Clearly, Internet technologies provide companies with tools to adapt to changing customer needs, and can be used for economic, strategic and competitive advantage. In contrast, companies that utilize the technology (at least having a web site that displays corporate and product information) are viewed as progressive and as continuously striving to meet the current needs of customers. In turn, these companies have a low cost base and have begun producing competitive, high-quality products. This general industry trend has created tremendous cost pressures on traditional businesses. By far, both companies and consumers have acknowledged the Internet as an effective tool for disseminating information.  From a marketing perspective the Internet is not just another marketing tool, but a tool that can reach far to help companies understand customers better, to provide personalized services and to retain customers. Hence, Internet technology is imperative in managing customer relationships for e-businesses.  In order to understand the roles of the Internet in managing customer relationships, other researchers have approached this issue by examining company usage of the Internet in customer services and online communities (Adam et al., 2002; Ng et al., 1998; Poon and Swatman 1999).  Boyle (2001) further reported that the effect of adopting Internet technology as a substitute for a traditional channel on buyer-seller relationships has increased. In addition, Bradshaw and Brash (2001) found that companies become more efficient in managing relationships with the use of Internet technology. Of course, the concurrent trend driving industry change is rising customer expectations; this means companies have to refine their ability to serve their "best" customers and create loyal customers. 
As a result, previously ad hoc and fragmented techniques for dealing effectively with customers were giving way to a more methodical CRM approach: identifying, attracting and retaining the most valuable customers in order to sustain profitable growth. However, there remains a need to empirically examine the impact of implementing Internet technology on various dimensions of relationship management: understanding customer behavior, delivering personalized services and acquiring customer loyalty. The purpose of this paper is to investigate the impact of using Internet technology on customer relationship management.  Implicit within this objective is an investigation of the purpose of using Internet technology in pure dotcom and click-and-mortar companies.  Past research has examined Internet technology usage in relationship management in small and large firms (Dutta and Segev 1999; Hamill and Gregory 1998; McGowan et al., 2001), in services and business-to-business firms (Berthon et al., 1999; Klein and Quelch 1997), and across geographic regions (Adam et al., 2002; Arnott and Bridgewater, 2002; Dutta and Segev 1999).  But none of these studies further examined the impact of Internet technology usage in pure dotcom and click-and-mortar companies in relation to CRM. In order to achieve the objective, different types of Internet technologies (web sites, e-mails, web forms, and chat rooms) were tested against their usage to obtain critical information.
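As a minimal illustration (not the authors' analysis), the difference in CRM-related Internet usage between click-and-mortar and pure dotcom firms could be checked with a simple test on the 2x2 usage table; the counts below are purely hypothetical.

    import numpy as np
    from scipy.stats import chi2_contingency

    # Rows: click-and-mortar, pure dotcom; columns: uses Internet for CRM, does not.
    table = np.array([[48, 12],   # illustrative counts only
                      [30, 25]])
    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi2 = {chi2:.2f}, p = {p:.3f}")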

 

Privatization in Saudi Arabia: Is it Time to Introduce it into the Public Sector Domain?

Dr. Abdullah M. Al-Homeadan, The Institute of Public Administration (IPA), Riyadh, Saudi Arabia

 

INTRODUCTION

Reducing government inefficiency and ineffectiveness is essential for improving the economic welfare of the human race. "Governments were forced to look at ways to reduce costs and inefficiencies, and turned to privatizing some of their...commercially oriented operations" (Ives, 1995, p. 2).  One of the countries that is seriously considering privatization is Saudi Arabia.  In fact, Saudi Arabia has already taken some big steps toward privatizing all of the enterprises that could be run by the private sector. The Saudi privatization program was announced by King Fahd on Monday, the fifth of May, 1994, and was delineated in the government's Third Basic Strategic Principle of the Sixth Development Plan (1995-2000). The program has been interpreted to mean "giving the private sector the opportunity to undertake many of the economic tasks of the government, while ensuring that the government does not engage in any economic activity that can be undertaken by the private sector" (p. 17).  The objective of this study is to explore the attitudes of department heads in the public sector of Saudi Arabia in order to identify the factors that have shaped those attitudes. The importance of the opinions of those administrators stems from the fact that they are going to be responsible for implementing the King's privatization initiative. Ensuring that policy implementers hold favorable attitudes toward privatization is essential to the success of the privatization program. Furthermore, favorable attitudes are a strong indication that reform through privatization is a very timely policy.  Before discussing these factors, it is appropriate to define the term privatization. Privatization refers to any policy that is "designed to alter the balance between the public and the private sectors" (Cook and Kirkpatrick, 1988, p. 3) to the benefit of the latter.  In their book The Political Economy of Public Sector Reform and Privatization, Suleiman and Waterbury (1990) state that motivations for privatization differ from one society to another. They add that the move towards privatization cannot be determined by a single factor. The authors offer seven factors that they say play a significant role in a government's decision to sell some of its assets to the private sector: "(1) The growing size of the public sector...[which]... is judged to have reached an excessive level that leads only to inefficiency.  (2) Privatized companies will be better managed and better financed through the capital markets than through the states' budget.   (3) Privatization contributes to the development of financial markets and hence can finance new and growing enterprises. It leads to increased availability of funds to the industry.   (4) Privatization leads to a substantial increase in the state's revenue from the sale of equity.   (5) Increase in the state's revenue can lead to the lowering of taxes and to the use of available funds for specific political purposes.  (6) Privatization can promote broad-based share-holding in society and so be a bulwark against social disorder.  (7) The 'participatory capitalist system' may help to detach workers from trade unions; and a weakened trade union movement may help dampen demand, increase investment, and facilitate adjustment" (p. 4-5).  The World Bank, in its 1988 report, attributes the shift towards privatization to the belief that the private sector is more efficient and effective than the public sector in the delivery and/or production of goods and services. 
Winward (1989) supports the findings of the World Bank by citing efficiency as the British government's motive for privatizing the country's public utilities. Winward adds that the informed investment decisions of the private sector help private firms survive competition in the marketplace while providing the consumer with lower prices. Calway (1987) argues that privatization might help strengthen the government's spending power, make up for tax cuts, create more jobs, and provide a better environment for investment. In addition, Calway states that some governments resort to privatization because they expect it to improve their competitiveness.  Halachmi (1989) adds to these factors people's negative attitudes toward bureaucracies. More (1992) points to the fact that governments believe that privatization improves performance, broadens ownership, strengthens the market, and gives government more time to focus on other roles in society.  In lesser developed countries (LDCs), Hemming and Mansoor (1988) state that the motives for considering the privatization option are many. Most important are: (1) efficiency; (2) a more competitive economic environment; (3) reduction of budget deficits; (4) liberalization of local economies; and (5) attracting foreign investment and business. To these factors, Cowan (1990) adds the following:  "[1] A growing awareness of the deficits created by mounting subsidy costs.  [2] The government may view privatization primarily as a source of additional revenue for the Treasury from the sale of state-owned assets.  [3] Some governments see profits from sales as a potential way to avoid, or at least reduce, a rise in tax rates.  [4] The interest in privatization may reflect a genuine concern on the part of government to reduce the public sector or just a reluctant response to pressures exerted by external agencies.  [5] General dissatisfaction with the performance of SOEs" (p. 10).  According to Killick (1986), privatization was widely considered by developing countries as a result of their disappointment with their central planning systems. Killick adds that the poor performance of such planning entities was the driving force behind the constant attempts by the International Monetary Fund (IMF), the World Bank and the United States Agency for International Development (USAID) to convince developing countries to rely on the private sector to the extent practical in their quest to promote efficiency and economic growth. The private sector has "more efficient means of allocating resources than planning ministries" (p. 101). "Privatization is...an important step in the direction of...restoring acceptable rates of growth" (El-Naggar, 1989, p. 3).

 

Trainer Development Formula: The Case of the Institute of Public Administration in Saudi Arabia

Dr. Abdullah M. Al-Homeadan, The Institute of Public Administration (IPA), Riyadh, Saudi Arabia

 

INTRODUCTION

Government in Saudi Arabia has always been the main financier of administrative development in the country. Ever since the creation of the Council of Ministers in 1953, the concept of an effective and efficient public sector has been one of the government's main strategies. The importance of the Council of Ministers stems from the fact that it exercises both legislative and administrative powers. One of the major steps taken by the Council towards improving the effectiveness and efficiency of the struggling ministries and public agencies was the invitation, beginning in 1957, of experts from a number of concerned international agencies. These agencies were the International Monetary Fund, the International Bank for Reconstruction and Development, the United Nations Technical Cooperation Committee, and the Ford Foundation. One of the major recommendations of these agencies was the establishment of the Institute of Public Administration (IPA) in 1961. The overall mission of the IPA has been the advancement of administrative development. However, this mission could not be accomplished without providing a sound administrative environment for the people who carry it out.  In this paper, light will be shed on the IPA's continuous effort to promote the efficiency of government civil servants and to prepare them academically and practically to carry out their responsibilities, to use their authorities in a way that ensures a high level of administrative professionalism, and to support the bases for developing the national economy. Light will also be shed on the IPA's efforts in the development of the private sector. The implications of this new development for the trainers will also be discussed. Finally, the role played by the private sector in developing trainers' skills will be pointed out towards the end of this paper.   The IPA was established by Royal Decree No. (93), dated 24/10/1380H (1961 A.D.), as an autonomous corporate body with its headquarters in Riyadh. It was necessary, due to expansion in training, research, and consultation needs, to establish three branches: the IPA branch in Dammam started functioning in 1973, the Jeddah branch in 1974, and a third branch for women in Riyadh in 1983.  The IPA was established to promote the efficiency of government civil servants and to prepare them academically and practically to carry out their responsibilities, to use their authorities in a way that ensures a high level of administrative professionalism, and to support the bases for developing the national economy. The IPA also participates in the administrative reorganization of government agencies and offers advice on administrative problems presented to it by the ministries and public organizations. In addition, it conducts research projects related to administration and cements cultural relationships in the field of public administration through the following:  1. Developing and performing instructional training programs for various echelons of employees,  2. Conducting scientific administrative research and studies, and directing and supervising them at the Institute and in collaboration with key officials in the ministries, government organizations, and their branches wherever field research is being carried out,  3. Collecting, tabulating, and classifying the administrative documents in the Kingdom,  4. Holding conferences on administrative development for top management levels of government personnel,  5. 
Hosting Arab, regional, and international conferences on matters related to public administration in the Kingdom, and participating in similar conferences abroad,  6. Publishing research and administrative data and exchanging them with relevant organizations in the Kingdom, the Arab world, and other countries, 7. Encouraging scientific research in administrative affairs and allocating study grants and royalties for this, and 8. Offering the IPA staff academic and training scholarships in administrative affairs in order to promote their administrative efficiency. Articles 4 and 5 of the Institute's statute made the IPA Board of Directors the governing authority for the conduct of its business and affairs. The Board has been given all authority necessary to achieve the Institute's objectives, and it also has the authority to make all the by-laws and issue the instructions needed for the smooth running of the Institute. The Board of Directors consists of the following:  1. The Minister of Civil Service, Chairman,  2. The Director General of the Institute of Public Administration, Vice-Chairman,  3. The Deputy Minister of Civil Service, Member,  4. The Director General of the General Organization for Technical Education and Vocational Training, Member,  5. The Undersecretary of the Ministry of Higher Education for Education Affairs, Member,  6. The Assistant Undersecretary of the General Presidency for Girls Education for Planning and Development, Member,  7. The General Director for Organization and Administration of the Ministry of Finance and National Economy, Member, and  8. The General Director of Planning and Development, Ministry of Education, Member.

 

The Relationship between the Dow Jones Eurostoxx50 Index and Firm Level Volatility

Dr. Kevin Daly, University of Western Sydney, Campbelltown, Australia

 

ABSTRACT

This paper presents a study of asset price volatility, correlation trends and market risk premia. Recent evidence (Campbell 2001) shows an increase in firm-level volatility and a decline in the correlation among stock returns in the US. We find that, in the Euro-Area stock markets, both aggregate firm-level volatility and average stock market correlation have trended upwards.  We estimate a linear model of the market risk-return relationship nested in an EGARCH(1,1)-M model for the conditional second moments. We then show that traditional estimates of the conditional risk-return relationship, which use ex post excess returns as the conditioning information set, lead to joint tests of the theoretical model (usually the ICAPM) and of the Efficient Market Hypothesis in its strong form.  To overcome this problem we propose alternative measures of expected market risk based on implied volatility extracted from traded option prices, and we discuss the conditions under which implied volatility depends solely on expected risk. We then regress market excess returns on lagged market implied variance, computed from implied market volatility, to estimate the relationship between expected market excess returns and expected market risk. We investigate whether, as predicted by the ICAPM, expected market risk is the main factor in explaining the market risk premium and whether the latter is independent of aggregate idiosyncratic risk.  It is widely accepted that volatility is not stable over time; both aggregate market volatility and single stock volatility generally exhibit time-varying behaviour. Schwert (1989) points out, "large changes in the ex-ante volatility of market returns have significant effects on risk-averse investors. Moreover changes in the level of market volatility can have important effects on capital investment, consumption, and other business cycle variables".  Most empirical studies of volatility have focussed on aggregate market volatility. Bollerslev, Chou and Kroner (1992), Hentschel (1995), Ghysels, Harvey and Renault (1996), and Campbell, Lo and MacKinlay (1997) give partial surveys of the enormous literature on these models (1). Recent literature examining the relationship between risk and return has focused on the role played by total risk, including idiosyncratic risk, in explaining stock market returns. Goyal and Santa-Clara (2001) find that there is a significant positive relation between average stock variance and the return on the market.  Campbell, Lettau, Malkiel and Xu (2001), henceforth CLMX, have provided important theoretical work on variance decomposition and an extensive analysis of long-term trends in both market and firm-level volatility for the US. CLMX (2001) present evidence from three US stock markets (NYSE, NASDAQ, AMEX) showing that the average correlation among stock returns has declined over the last two decades. Furthermore, this decline in US stock market correlations has been accompanied by a parallel increase in average firm-level volatility (2), whilst market volatility has not shown any significant increase in trend. This study focuses on the relationship between stock market and firm-level volatility in the Dow Jones Eurostoxx50 Index (the leading stock market index in the Euro-Area). To analyse this relationship we require a decomposition of the variance of the Dow Jones Eurostoxx50 Index. This involves splitting the average total variance of the returns on the Eurostoxx50 Index into market variance and aggregate firm-level variance. 
This in turn enables us to study, model and test the relationship between risk and return for a portfolio represented by the stocks included in the Dow Jones Eurostoxx50 Index.  Variance decomposition will be carried out in a manner similar to the methodology used by Campbell, Lettau, Malkiel and Xu (2001). The weighted average variance of excess returns on all the stocks in the Eurostoxx50 Index will then be decomposed into market variance and average firm-level variance. We also perform various tests on the relationship between risk and return; in particular, we test the International Capital Asset Pricing Model (ICAPM) claim that a positive relationship exists between the market risk premium and the expected market variance. Here, an EGARCH-M model will be used to estimate the conditional first and second moments of the market excess returns. We will then use implied market variance, computed from the volatility rate implied by traded prices of options on the Dow Jones Eurostoxx50 Index, as an explanatory variable for the market risk premium. Market excess returns will finally be regressed against conditional (3) average firm-level volatility to test the CAPM and ICAPM claim that market volatility is the only priced risk factor and that no relationship exists between the market risk premium and idiosyncratic risk. 
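A minimal sketch (not the paper's code) of a CLMX-style decomposition for the index constituents is given below: average total variance is split into market variance and a weighted average firm-level variance. The file names, column layout and index weights are assumptions for illustration, and betas are ignored, as in the simplified CLMX decomposition.

    import pandas as pd

    returns = pd.read_csv("eurostoxx50_returns.csv", index_col=0)             # T x 50 daily excess returns
    weights = pd.read_csv("eurostoxx50_weights.csv", index_col=0).squeeze()   # index weights summing to 1

    market = returns.mul(weights, axis=1).sum(axis=1)   # value-weighted market return
    market_var = market.var()                           # market-level variance
    residuals = returns.sub(market, axis=0)             # firm returns in excess of the market
    firm_var = (residuals.var() * weights).sum()        # weighted average firm-level variance

    print(f"market variance: {market_var:.6f}")
    print(f"average firm-level variance: {firm_var:.6f}")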

 

Strategic Orientation of Banking and Finance Managers in the United Arab Emirates

Dr. Quhafah Mahasneh, University of Sharjah, United Arab Emirates

 

ABSTRACT

In the emerging economic scenario in the UAE, characterized inter alia by intense competition, the strategic orientation of banking and finance firms is viewed as a critical roadmap to competitive edge and superior performance. Policymakers have taken ambitious steps to transform the UAE economy into a regional hub of banking and finance. Missing, however, has been an empirical study focusing on the strategic orientation of managers in the banking and finance sector of the UAE economy. Therefore, an attempt is made in this study to empirically investigate the strategic orientation of managers in this important sector of the economy. It is hoped that a study of this kind would have great implications for policy not only in the UAE but also elsewhere in the world.  A compelling body of literature has emerged in recent decades focusing on various aspects of strategy (Porter 1980, D'Aveni 1994, Barney 1991, Brandeberger and Nalebuff 1995, Hambrick and Fredrickson, 2001). Consultants and academics have contributed immensely in terms of strategy designs and their effectiveness under various environmental conditions. However, despite the proliferation of some very promising studies in the literature, there is a need to replicate studies focusing on the orientation of organisations in order to compare empirical findings and contribute to advances in theory (Hubbard and Armstrong, 1994). Interestingly, most of the research in the area of strategic orientation has been undertaken in industrial countries whose economies have attained relative maturity and stability. It is assumed that firms in these countries are also able to design appropriate responses to competitive onslaughts and stay healthy. But is this assumption applicable in a developing economy dominated by inward-looking, product-oriented firms (Kinsey, 1988)? It would certainly be analytically interesting to find out. Therefore, a key objective of this study is to investigate how far managers in the banking and finance sector of a developing economy such as the UAE have gone in adopting a strategic orientation to outflank competitors and help the sector become a service hub in the Middle East.  In the process of developing its financial sector, the United Arab Emirates has taken a new step as Dubai, the commercial city, announced its plans to establish an international financial center (DIFC). The market is expected to offer a stock exchange, asset management, a reinsurance market, Islamic finance, and other financial services that are expected to reduce capital outflows to foreign international markets. The DIFC is expected to face competition from the Bahrain financial market, which is also supposed to provide similar financial services. Undoubtedly, Bahrain has been in this business for many years and at some point in the past was considered the only business and financial hub in the area. However, the extent to which the banking sector in the UAE is prepared to meet the challenges of this new financial center will demonstrate the ability of this center to compete with the Bahrain financial center.  A cornerstone of Bahrain's strength has been the implementation of a rigorous regulatory regime that conforms to the highest international standards, and the Bahrain Monetary Agency has spared no effort in providing the necessary support and infrastructure to facilitate the growth of the industry, it has been stressed (see Bankers Digest 2003).  
Backed by the country's oil wealth and its position as the major regional business hub, the United Arab Emirates is striving to overcome competition and skepticism and establish itself as a force in global banking. The grandeur of the project and its services demonstrates the country's ambitions to modernize and develop its economy. At this point, the question that we wish to raise is whether the country's banking system is well prepared to handle such a major expansion in financial services and competition.  Despite declining interest rates and decreasing margins in corporate business, UAE banks put up a record performance in 2002. The S&P report on this sector, however, expressed concerns over the high number of banks as well as the easy banking conditions available. The UAE banking sector has seriously implemented steps aimed at modernizing financial services, although those steps are far from making this sector competitive with much more advanced foreign banks in developed markets. Most national banks are facing competition from foreign subsidiaries and other foreign investment banks. Taking advantage of cheap and imported technology, products, and financial support, most foreign banks were able to realize exceptional growth in income and assets in recent years (see Mahasneh 2001). On the other hand, taking advantage of tax-free business, national banks were able to achieve large profits over the past few years.  The United Arab Emirates' World Trade Organization commitment to open its banking sector to foreign competition will result in mounting pressure on local banks to adjust their activities in order to meet tough competition. Undoubtedly, consumers will benefit from this openness, as prices of credit and other banking services must decline. According to Yip, there are four major drivers of globalization (market, cost, government, and competition) which could be analyzed in order to measure the degree of globalization in the banking industry.

 

Great Leaders Teach Exemplary Followership and Serve As Servant Leaders

Dr. Michael Ba Banutu-Gomez, Rowan University, Glassboro, NJ

 

ABSTRACT

This paper focuses on the impact of the exemplary follower and the servant leader. It examines their relationship and the roles they play in the creation of the "Learning Organization" of the future. The first part of this framework addresses the process of being a good follower. This process includes leaders alienating followers, the problems leaders face in teaching leadership, the skills of exemplary followers, exemplary followers and teams, organizations of the future, leaders transforming people, and leaders being measured by the quality of their followers. The second part of the paper deals with the servant leader. Its process includes servant leaders eliciting trust in followers, modern Western societies, community providing love for humans, business organizations being expected to serve, and modern organizations searching for a new mission. Together, the two frameworks provide insights and guidelines for managers and leaders in leading organizations of the future.  To succeed, leaders must teach their followers not only how to lead (leadership) but, more importantly, how to be a good follower (followership).  Contrary to popular negative ideas about what it means to be a follower, positive followership requires several important skills, such as the ability to perform independent, critical thinking, to give and receive constructive criticism, and to be innovative and creative. Furthermore, we believe that great leadership is a process that can be learned and is not restricted to a few "chosen or special" individuals born with an unusual capability or skill. Some seem to have more to learn than others, but the potential for exemplary followership seems to be universal. Through solicited comments and regular participation, employees shared ownership in determining policies at work (Gilbert & Ivancevich, 2000). Being a follower has a negative connotation because the term is usually used to refer to someone who must constantly be told what to do. Regardless of work unit individualism/collectivism, supervisors were more likely to form trusting, high-commitment relationships with subordinates who were similar to them in personality (Schaubroek & Lam, 2002).  Most people think of a good follower as someone who can take direction without challenging their leader.  In contrast to this definition, exemplary followers take initiative without being prompted, assume ownership of problems, and participate actively in decision-making. Not only can creative contribution be valuable to a firm, but the ability to come up with unique yet appropriate ideas and solutions can be an important advantage for individuals as well (Perry-Smith & Shalley, 2003). They distinguish themselves from ordinary followers by being "self-starters" who go above and beyond what people expect of them (Kelley, 1992).  All leaders have at least one follower who has become alienated in relation to authority.  This person usually thinks they are right and exhibits a hypercritical attitude toward authority figures.  Their hostile feelings toward leaders are often the result of unmet expectations and broken trust.  If these experiences turn us off, they shape our subsequent response to the culture. For example, too much certainty leads to complacency, and not enough predictability can result in alienating workers. A leader's actions, therefore, can create either alienated or committed workers (Fairholm & Fairholm, 2000). They may be people who were not recognized for their contributions in the past.  
An outstanding advantage of recognition, including praise, as a motivator is that it is no-cost or low-cost, yet powerful. Recognition thus has an enormous return on investment in comparison to a cash bonus (Dubrin, 2001). Leaders must first confront the hostility expressed by alienated followers in order to replace it with something more positive.  To address the complaints of an alienated follower, a leader must confront the perceived inequality and re-establish trust. If goals have diverged, an overarching goal that both leader and followers accept must be found.  When this has been accomplished, leaders can continue to work with alienated followers to help them accept that setbacks are part of reaching any goal. Consequently, people understand each other, they share the same concepts, and they have the same vision (Deneire & Segalla, 2002). Leaders need to remind followers that if you belong to a community or organization, you have a responsibility to contribute to making it better for everyone, not just yourself.  This is why the leader is required to go beyond reminding followers and instead lead by example. In other words, leaders must help their followers relinquish a typically Western credo: "I am free to do whatever I want, so long as it does not harm anyone", and substitute instead, "I am free to do whatever I want, so long as it benefits more than just myself". Leaders must convince their alienated followers that they want to achieve more than just a mutually satisfactory resolution of past grievances: rather, a mutual acceptance, understanding and appreciation of a shared dream or goal (Kelley, 1992).

 

Factors that Affect the Selection of Defensive Marketing Strategies: Evidence from the Egyptian Banking Sector

Dr. Mansour S. M. Abdel-Maguid Lotayif, Cairo University, Cairo, Egypt

 
ABSTRACT

The current study aims at identifying the causal relationships between defensive marketing strategies {e.g. business intelligence strategy (BI), customer service strategy (CS), customer complaint management strategy (CCM), Aikido strategy (AIKO), free telephone line strategy (FTL), focus strategy (FOC), differentiation strategy (DIFF), and cost leadership strategy (CL)} and four sets of variables. These sets are demographics (e.g. respondents' positions, ages, educational levels, experiences, bank experiences, and the bank's number of employees), the bank's objectives (e.g. to increase the bank's market share, to maintain the current market share, to increase the bank's profit, to increase the bank's customer satisfaction, and to increase customer loyalty), the bank's rivals, i.e. kinds of entry modes (e.g. branches, subsidiaries, joint ventures, mergers, direct exporting and indirect exporting), and rivals' competitive advantages (e.g. their marketing mix variables, all their marketing program variables, offering of new kinds of banking services, high interest rates on deposit accounts, low interest rates on loans given, a well-designed service delivery system, employing competitive staff, and strong advertising campaigns). The experiences of 591 bankers were utilized to investigate these relationships. Through canonical correlation analysis in the StatGraphics statistical package, strong and significant relationships between defensive marketing strategies and these four sets of variables were supported.   Defensive marketing is the body of knowledge that uses customers as a shield in the battle with rivals in a specific market (Griffin et al., 1995; Heskett et al., 1994; Reichheld, 1993; Fornell, 1992; Reichheld and Sasser, 1990; and Fornell and Wernerfelt, 1988). Hauser and Shugan used the term defensive marketing for the first time in 1983; it can therefore be considered a new term in the marketing literature, which might explain the lack of studies related to this body of knowledge.  As offensive marketing strategies do, defensive marketing strategies use the elements of marketing programs (i.e. promotional and marketing mix elements) to defend current markets and revenues in order to grow. However, world growth has slowed since the September 11th terrorist attack (Loomis, 2002; Marketing, 2002; and Sarsfield, 2002), and many MNCs are finding that the only way to grow is by taking market share from the competition (Caudron, 1994). Pressures from competitors, changing customer needs, and the macro-economy continuously confront businesses, requiring them to constantly evaluate and change their strategic goals (Hao, 2000; McEvily et al., 2000; and Inkpen, 2000). Opportunities, competitors, and resources are viewed globally, as most millennium MNCs realize they must be proactive to survive and succeed (Lerouge, 2000). All these dramatic changes in today's businesses entail adopting vigilant thinking for defending, at the least, the status quo. Consequently, special attention should be given to defensive marketing strategies in order to deal with the severe waves of competition anticipated in the coming decades. Compared with offensive marketing studies (e.g. Davidson, 2000; Wind, 1982; Pessemier, 1982; Urban and Hauser, 1980; and Shocker and Srinivasan, 1979), defensive marketing studies need more attention, as research related to this body of knowledge is still far from complete. 
Therefore, the current study is an endeavor to identify the factors that affect the selection of defensive marketing strategies. Purba (2002); Erto and Vanacore (2002); Groom and David (2001); Tax and Brown (1998); Tyson (1997); Cotter et al. (1997); James et al. (1994); Malhotra et al. (1994); Desatnick and Detzel (1993); Myers (1993); Cronin and Taylor (1992); Bergstrom (1992); Bolton and Drew (1991a, 1991b); Berry and Parasuraman (1991); Chardwick (1991); Heskett et al. (1990); Fornell and Wernerfelt (1988); Parasuraman et al. (1985); Porter (1985); Fornell and Wernerfelt (1984); Hauser and Shugan (1983); Porter (1980); and Hofstede (1980) have contributed to definitions of defensive marketing strategies. They have identified business intelligence strategy (BI), customer service strategy (CS), customer complaint management strategy (CCM), Aikido strategy (AIKO), free telephone line strategy (FTL), focus strategy (FOC), differentiation strategy (DIFF), and cost leadership strategy (CL) as important factors.  The literature regarding defensive strategies and the variables that affect them can be criticized on the following points: first, it is scattered and narrow in its scope of coverage, as each defensive marketing strategy has been addressed separately. Second, the variables that affect the selection of defensive marketing strategies have not been addressed in the literature before. Therefore, the current study attempts to address this issue by adopting a collective and comprehensive approach that tackles all the viable defensive marketing strategies and the variables that affect their selection in one study. Consequently, this can be considered a pioneering endeavor in this respect.  Determining patterns of relationships between defensive marketing strategies and four sets of variables orients the current study. These four sets of variables are: respondents' demographics, the bank's objectives, the bank's rivals, and rivals' competitive advantages. To achieve the above aims, the following hypotheses are tested: H: "There is a strong and significant relationship between selected defensive marketing strategies (i.e. eight strategies) and respondent's position, age, educational level, experience, bank's experience, and number of employees".
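A minimal sketch of the canonical correlation analysis named above is given below, using Python rather than the StatGraphics package the study reports; the survey file, the numeric coding of the demographic variables, and the column names are all assumptions for illustration.

    import pandas as pd
    from sklearn.cross_decomposition import CCA

    survey = pd.read_csv("bankers_survey.csv")   # hypothetical 591-respondent data set
    # Set 1: scores on the eight defensive strategies; Set 2: demographics (numerically coded).
    strategies = survey[["bi", "cs", "ccm", "aiko", "ftl", "foc", "diff", "cl"]]
    demographics = survey[["position", "age", "education", "experience", "bank_experience", "employees"]]

    cca = CCA(n_components=2)
    cca.fit(demographics, strategies)
    x_scores, y_scores = cca.transform(demographics, strategies)

    # Canonical correlations between the paired canonical variates.
    for i in range(2):
        r = pd.Series(x_scores[:, i]).corr(pd.Series(y_scores[:, i]))
        print(f"canonical correlation {i + 1}: {r:.3f}")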

 

Hedging for Global Equity Investing

Dr. Tulin Sener, State University of New York - New Paltz, NY

 

ABSTRACT

In the short run, substantial currency surprises exist, including forward rate biases and cross-product terms.  However, significant negative covariances between the currency contributions and asset returns may result in less or no need for hedging; a natural insurance takes place. Hedging decisions for a single investment versus a global equity portfolio may not be the same for a given period. The determinants of the optimal hedge ratio for the portfolio are defined as the domestic asset weight and the covariances between asset returns and currency contributions, as well as the variance of the currency contribution.  Hedging is less effective for longer time periods and for global portfolios with larger domestic asset weights (e.g. 80 percent).  When investing globally, currency risk and hedging become the major concern, which is one of the most controversial issues in the literature.  Jorion (1989) and Gastineau (1995) accept the presence of hedging benefits in the short run, but not in the long run.  Hauser, Marcus and Yaari (1994) verify the benefits of hedging in developed countries (DCs), but not in emerging markets (EMs).   Further, the impact of the forward rate bias, or the currency surprise, on hedged and unhedged returns and risk has been emphasized (Ankrim and Hensel (1994), Gardner and Wuilloud (1995), Clarke and Tullis (1999), Baz et al. (2001) and Cornelia (2003)). In the long run, the depreciation of one currency may be offset by the appreciation of another.  Conversely, in the short run, currency markets may have significant direct and indirect effects on the dollar return and risk of a foreign investment.  The indirect effect represents the association between asset and currency returns.  The literature ignores the indirect effect and calculates the dollar (base currency) return by assuming additivity.  With currently volatile and interdependent equity and currency markets, particularly in EMs, returns and risks of assets and currencies cannot be additive (e.g. Solnik (2000, p. 128 and 575) and Sener (1998)).  The study extends the Filatov and Rappoport (1992) model (FRM) of portfolio risk minimization and differs from it in several ways.  First, the FRM deals with hedging only for global portfolios, assuming additive asset and currency returns, while this study considers hedging for both global portfolios and single assets, assuming non-additive returns. In the dollar return and risk calculation, the currency contribution is used instead of the currency return, which takes into account the cross-product term between asset and currency returns.  Second, the hedged dollar return is formulated in terms of the return on the unhedged asset and the explicit currency surprise in this study, while it is constructed in terms of the returns on the hedged asset and the currency in the FRM. The study shows the equivalence of the two approaches, and it indicates that, even in the case of unbiased forward rates, the unhedged and hedged returns for a single asset may not be equal to each other because of a significant cross-product term. Third, it is shown that the impact of hedging on the portfolio return depends on the currency surprise, which comprises the forward rate bias and the cross-product term, as well as the domestic asset weight.   Finally, the empirical model includes a much larger set of equity indexes from DCs and EMs, and it compares short-term strong-dollar vs. weak-dollar economic cycles.  The paper is in five sections.  Following the introduction, Section II presents the model.  
Sections III and IV provide, respectively, the data and estimation procedures and the findings. Finally, Section V sets forth the summary and conclusions.  Here, rf and rcc denote, respectively, the foreign return and the currency contribution.  The currency return (or loss) (rc) is the direct effect and the cross-product term (rc × rf) is the indirect effect, or the return due to diversification (Booth and Fama (1992) and Sener (2001)).  The cross-product term may become an important ingredient of the dollar return and risk as the volatility of the market increases.  For example, in equation (2), when a negative rc and a positive rf take large values, then, in equations (1.A) and (1.B), the currency loss will be larger, but the risk of the dollar return will be smaller.  Negative correlations create diversification benefits and reduce the need for currency risk hedging.  If the US investor wants to remove the total effects of currency fluctuations on the dollar return, he can hedge the foreign currency proceeds by selling forward contracts short.  In other words, the investor wishes to achieve a hedged dollar return that equals the foreign return.  If transaction costs are neglected, the hedged dollar return on the foreign asset (rh$) takes the form shown below, where h is the hedge ratio, f is the forward premium, and (rcc − f) is the currency surprise.  The currency surprise represents the return to the forward contract.
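The equations this abstract refers to (the dollar-return decomposition and the hedged dollar return) appear to have been lost in transcription. A plausible reconstruction, based solely on the definitions given above and not necessarily the authors' exact notation, is:

r$ = rf + rcc,   with   rcc = rc + (rc × rf)

rh$ = r$ − h (rcc − f) = rf + (1 − h) rcc + h f

Under unbiased forward rates the expectation of rc equals f, so the expected currency surprise reduces to the expected cross-product term (rc × rf); this is why, as the abstract notes, hedged and unhedged returns for a single asset can still differ even when forward rates are unbiased.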

 

Finance Faculty’s Understanding and Acceptance of Accreditation for Chartered Financial Analyst in Taiwan

Dr. Mei-hua Chen, National Changhua University of Education, Taiwan

Dr. Bryan H. Chen, National Changhua University of Education, Taiwan

 

ABSTRACT

Taiwan joined the World Trade Organization (WTO) on January 1, 2002, and now faces significant competition in many respects from other members of the WTO. As far as financial education in Taiwan is concerned, it is urgent to prepare finance students in Taiwan for a global market. For instance, many universities in the United States currently offer CFA-oriented degrees or CFA exam preparation courses because the CFA designation is becoming one of the fastest-growing credentials among investment professionals worldwide. The findings from this study provide information to financial education faculty for planning business courses in order to meet students' needs and also to capture future business trends. The findings also assist the selected finance departments in designing more appropriate curricula for their finance majors. The Association for Investment Management and Research (AIMR) serves more than 45,000 investment professionals from more than 95 countries all over the world (Business Wire, 2000).  AIMR, a non-profit organization, administers the Chartered Financial Analyst (CFA) designation, which requires passing a three-level series of six-hour exams and fulfilling three years of related experience (CPA Journal, 1998). In fact, AIMR supervises the CFA examinations at 170 exam sites in 73 countries on June 2 or June 3 each year (for the Far East and Indian Subcontinent). The CFA candidates have to prepare for broad subjects such as ethical and professional standards, asset valuation, financial statement analysis, quantitative methods, economics and portfolio management. The CFA exams are written in English and include multiple choice, problems, cases, essays, and item set questions (Business Wire, 2000).  In general, CFA program candidates spend 250 to 300 hours preparing for each level's exam in the self-study CFA program (Business Times, 2000).  According to AIMR, most of the CFA program candidates are analysts, portfolio managers, investment sales professionals, brokers, traders, and accountants who work at investment firms, banks, or insurance companies (Cardona, 1994). The number of financial professionals taking the CFA exam has increased tremendously since 1963, especially among equity analysts (Corporate Financing Week, 1998). Tomas A. Bowman, President and Chief Executive Officer of AIMR, indicated that several reasons have spurred demand for the CFA designation. First, the CFA designation is becoming one of the fastest-growing credentials among investment professionals worldwide because the globalization of capital markets has created a need for a globally recognized professional credential. Second, the rapid expansion in enrollment in the CFA program in Asia reflects growing recognition of the CFA designation as the most rigorous and universally accepted professional standard in the investment industry. Finally, the CFA exams provide the latest knowledge that investment professionals need to meet up-to-date global marketplace practice requirements (Canada News Wire, 2000).  The purpose of this study was to determine and compare the perceptions held by university finance faculty in Taiwan regarding their understanding and acceptance of accreditation for chartered financial analysts. The following research questions guided the study: 1. To what extent do finance faculty desire to help promote the CFA program? 2.
What are the differences in the desire of finance faculty to help promote the CFA program based upon their personal information and school information? 3. To what extent do finance faculty desire to improve administration of the CFA faculty scholarship program? 4. What are the differences in the desire of finance faculty to improve administration of the CFA faculty scholarship program based upon their personal information and school information? 5. To what extent do finance faculty desire to improve administration of the CFA student scholarship program? 6. What are the differences in the desire of finance faculty to improve administration of the CFA student scholarship program based upon their personal information and school information?  This study used a national survey to gather data in an attempt to present a more complete picture of the perceptions held by university finance faculty in Taiwan regarding their understanding and acceptance of accreditation for chartered financial analysts. The population for this study consisted of the entire university finance faculty in Taiwan. There were 33 universities in Taiwan housing finance departments in 2003. A list of these 33 universities was obtained from the Ministry of Education in Taiwan. The researcher-developed survey was sent out to 320 finance faculty in these 33 finance departments.  The survey instrument was comprised of five sections that used a checklist response format. Section I requested finance faculty's personal information.  Section II requested finance faculty's school information. Section III requested finance faculty to indicate their CFA program information. Section IV requested finance faculty to indicate their CFA professor scholarship program information. Section V requested finance faculty to indicate their CFA student scholarship program information. The survey was modified from the AIMR 2001 Faculty Survey (2001). The modified survey was developed based upon the experiential background of the researchers working as finance professors in the Department of Business Education at the National Changhua University of Education, Taiwan. The original mailing on November 20, 2002, generated 76 responses, with an additional 42 responses received after the December 11, 2002, follow-up mailing.

 

Assessing the Measurement of Organizational Agility

Dr. Norizan M. Kassim, UAE University, Al-Ain, U.A.E.

Dr. Mohamed Zain, UAE University, Al-Ain, U.A.E.

 

ABSTRACT

This research examines four factors of agility (enriching customers, mastering change, leveraging resources, and cooperating to compete) and how they relate to the use of information technology and information systems by Malaysian firms in their efforts to become more agile and competitive. Measures of all four factors of agility were developed and empirically tested using confirmatory factor analysis. The results for the measurement instrument developed for the four factors of agility indicate acceptable psychometric properties of the scale.  Rapid changes and challenges in the dynamic information technology environment, and increasingly strong pressures from hypercompetitive markets, have forced Malaysian firms to turn to information technology (IT) and information systems (IS) to improve organizational agility and to expand globally. IT/IS has now become the only way to cope with today's volume and complexity of data.  However, as technology becomes more mature, management now focuses on controlling the business rather than the technology. Thus the focus now is on how firms become agile in the market and whether this agility can be sustained or enhanced through the use of IT/IS. Agility is a necessary ability as the business environment turns into a turbulent place of competition (Sharifi and Zhang, 1999). Gujrati and Kumar (1995) have even come up with a motto for agility which says agility is "reconfigurable everything." Firms that are agile are those that are able to manage and to adapt to the changes that occur in their environment. In other words, firms need to be fast and lean and be responsive to change in order for them to grow and to maintain or expand their profitability.  The term agility has drawn a lot of attention in the world of business (Lo, 1998). Generally, agility is the ability of a firm to face and adapt proficiently to a continuously changing and unpredictable business environment.  Agility is not about how a firm responds to changes, but about having the capabilities and processes to respond to an environment that will always change in unexpected ways. Kodish et al. (1995) refer to agility as the firm's nimbleness to quickly assemble its technology, employees, and management via a sophisticated communication infrastructure in a deliberate, effective, and coordinated response to changing customer demands in a market environment of continuous and unanticipated change. Thus, the concept of agility comprises two main factors: proper response to change and exploiting and taking advantage of the changes (Dove, 1996; Kidd, 1995). Historically, agility of firms was first identified by Goldman, Nagel, Preiss, and Dove (1991) in their 21st Century Manufacturing Enterprise Strategy report. Agility focuses on the use of IT/IS to provide strategic directions and capabilities to help organizations be competitive in the face of change.  It is about having strategic management that helps a company stay lean and flexible to face uncertain and unpredictable changes, while the presence of IT/IS ensures that the company will improve its efficiency in getting the job done with minimum waste, and its effectiveness in selling its products or services and in building up a loyal and expanding customer base. With the use of IT/IS, firms are now able to join global markets.
Indeed, to be agile, firms need to carry out strategic agility planning that will indirectly form a structure to fulfill customer needs and offer the right products and services at the right time and in the right quantity. For example, US airline companies were among the first to use the technology (Goldman et al., 1991) to achieve these objectives. The use of information technology has given the companies in the industry, such as American Airlines, a huge competitive advantage vis-à-vis other airline companies (Neo, 1988; Niketic and Mules, 1993; Lucas, 1997). The success of the usage of IT/IS by a firm can be measured in terms of its success in generating information to achieve its strategic objectives. Therefore, it is essential for the firm to create an appropriate information management strategy and a systems infrastructure to support it (Bentley, 1998). As pointed out by Wilson (1997), decisions are made in support of the organization's objectives, and nowadays information is required to be handled more freely and openly. Moreover, IT/IS lets management build systems and provide tools to extract information from online databases to support decision making. A previous survey reported that nearly three-quarters of MIS executives said that their organizations had implemented information systems during the last year solely to get ahead of the competition (Thierauf, 1993).  According to Mates, Gundry, and Bradish (1998), agility is a widespread strategy set by firms to face unpredictable changes and to be competitive in the business market so that they can compete with any firm.  Nevertheless, there is little formal description in the agility literature of what activities accompany the changes. For example, firms need to establish processes that will allow them to master change, and to face change anytime and anywhere. Within this context, the enduring nature of IT/IS may be easily overlooked or neglected.
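As an illustration of the kind of psychometric testing the abstract describes, the short Python sketch below computes Cronbach's alpha for a block of scale items. It is a minimal, hypothetical example: the item scores, sample size, and acceptance threshold are assumptions for illustration, not the authors' data or exact procedure.

import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) matrix of scale scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items in the scale
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses (5 respondents x 4 items) for one agility factor,
# e.g. "enriching customers"; real data would come from the firm survey.
scores = np.array([[4, 5, 4, 4],
                   [3, 3, 4, 3],
                   [5, 5, 5, 4],
                   [2, 3, 2, 3],
                   [4, 4, 5, 5]])
print(round(cronbach_alpha(scores), 2))  # values of roughly 0.7 or above are commonly taken as acceptable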

 

The New Economic and Social Model – A Third Stage of Economic and Social Development in Brazil in the Millennium: One Brazil-Shared Humanity –Wealth Creation and Social Justice

Dr. Richard Trotter, University of Baltimore, Baltimore, MD

 

ABSTRACT

Within the last fifty years Brazil has embarked upon an economic program of industrialization and economic development. The first phase of its program focused on economic development through import substitution and considerable state intervention in the country's economic policy. During the 1990s the Brazilian government entered upon the second stage of its economic development policy: a market model based on fiscal discipline, trade liberalization, privatization and deregulation. This model produced economic growth and prosperity for the upper and middle classes. Notwithstanding Brazil's economic growth, income disparity has increased, with the result that Brazil is in effect two entities: a first world and a third world country. If the Lula administration is to succeed, it must develop an economic model that combines market economy efficiencies while addressing the enormous economic and social inequalities existing in Brazil. Additionally, the Lula administration must address structural issues relating to income distribution arising out of race discrimination, labor policy and educational reform, as well as creatively borrowing from the experience of the United States, Canada, China and other Asian countries. John Williamson, a Senior Fellow at the Institute for International Economics in Washington, suggested in 1989 ten reforms that he believed should be undertaken in South America to enhance the region's economic and social development. The reforms included the following: Fiscal discipline. This was in the context of a region where almost all the countries had run large deficits that led to balance of payments crises and high inflation. Reordering public expenditure priorities. This suggests switching expenditure in a pro-poor way from things like indiscriminate subsidies to basic health and education. Tax reform. Constructing a tax system that would combine a broad tax base with moderate marginal tax rates. Liberalizing interest rates. Competitive exchange rate. Trade liberalization. Williamson stated that there was a difference of view about how fast trade should be liberalized. Liberalization of inward foreign direct investment. Williamson did not include comprehensive capital account liberalization. Privatization. This was the one area in which what originated as a neoliberal idea had won broad acceptance. We have since been made very conscious that it matters a lot how privatization is done: it can be a highly corrupt process that transfers assets to a privileged elite for a fraction of their true value, but the evidence is that it brings benefits when done properly. Deregulation. This focused specifically on easing barriers to entry, not on abolishing regulations designed for safety and environmental reasons. Property rights. This was primarily about providing the informal sector with the ability to gain property rights at acceptable cost. Not included on Williamson's list, but clearly an important issue for all of South America and other developing countries, is corruption. This paper will focus on Brazil, but in many respects it can also apply to Argentina, other South American countries and developing economies throughout the world. The Brazilian government during the 1990s applied many of the proposals suggested by Williamson, and yet as of 2003 income inequality in Brazil remains among the most extreme in the world, making Brazil in effect two countries.
Paulo Sergio Pinheiro, writing of the inequality in Brazil, has observed:  The paradox of Brazil today is that it represents at the same time the best and the worst of worlds. The country is the tenth largest economy in the world with a gross domestic product (GDP) of U.S. $417 billion, and therefore is part of a group formed by the USA, Japan, Germany, France, Italy, Great Britain, Canada, Spain and Russia.2  Yet despite this fact the income distribution is among the most unequal in the world. Joao Seboia has observed:  As far as distribution of earnings is concerned, … the situation worsened throughout the 1980s. The Gini index, which was already high by international standards, increased even more, reaching 0.630 in 1989. At the end of the decade, the top 1% of earners together received more than the bottom 50% of earners. The improvement in the distribution of earnings in the 1990s occurred in a perverse way, namely through a fall in earnings at all income levels which was more significant for higher income levels. Nevertheless, an analysis of the available data up to the end of the 1990s confirms that we are still a long way from what might be termed a process of genuine income distribution. In 1995, the richest 20% in the six metropolitan regions under consideration received 63% of all income while the poorest 50% received only 12%.3 Why, despite progressive economic policies, has this situation persisted? This paper will focus on a third phase of economic and social development in South America.

 

Critical Success Factors in the Client-Consulting Relationship

Dr. Steven H. Appelbaum, Concordia University, Montreal, Quebec, Canada

 

ABSTRACT

The primary intent of this study is to examine recent projects involving external management consultants at a North American telecommunications firm, from the employees' point of view, to measure the extent to which the aforementioned "critical success factors" were perceived as being evident. A secondary purpose was to examine which, if any, of these factors differ between more and less successful consulting projects, with a view to building a model to predict employees' perceptions of the level of the projects' success. A third objective was to gather employee opinions on other factors that might contribute to the success of consulting projects. A fourth, and final, objective was to gather general employee opinions on the use of management consultancy at a North American telecommunications firm. A total of 102 employees responded to a questionnaire consisting of 59 questions. A model including six independent variables was able to predict the overall rating of project success, with an adjusted R2 = 0.68, F = 27.81 (p < .0001).  The significant variables, in order of importance, were: the solution took into account our internal state of readiness; the project included prototyping new solutions; the project deliverables were clear; the consultant partnered with the project team throughout; the consultant was professional; and the consultant understood our sense of urgency.  There were substantial differences seen on most measures between projects judged "successful" and projects judged "not successful". Nevertheless, it is encouraging that many of the success factors suggested in the literature, and proposed under "an ideal client-consultant engagement", were judged as present in management consulting projects at the telecommunications firm, to one degree or another.  General opinions of management consultants were mixed and somewhat negative. Employees at the telecommunications organization do not agree with the traditional benefits of management consultants promoted by the industry. Finally, the results of this study certainly support the anecdotal and theoretical models, in particular those emphasizing the importance of process issues, the client-consulting relationship and their impact on project outcome.  Management consulting is here to stay. Though the industry may have been tarnished by last year's Enron scandal, it is nevertheless a resilient and highly successful trade. According to Industry Canada, the total number of management consulting establishments exceeded 26,000 in 1998, with total revenues approaching $5.7 billion CAD (Industry Canada, 2001).  Since 1990, overall revenues in management consulting have grown 10-30% per year. Thus, the use of management consultants is very widespread. In fact, a US Department of Commerce survey conducted in 1998, cited in Industry Canada's report, reported that 70% of all businesses and government organizations in Canada had used the services of a management consultant at least once in the last five years. Finally, they note that the management consulting industry is a key recruiter of business school graduates and has become a desirable employer; currently, almost 40% of graduates in each MBA class attempt to enter the consulting industry.  Many authors have noted that, despite the size and significance of this industry, there does not seem to be a correspondingly large wealth of empirical data on the practice of management consulting.  Much of the theoretical framework described is derived from anecdotal evidence presented by numerous authors.
Where available, this article will draw upon empirical evidence. Moving from the general to the specific, the paper will focus on aspects of the client-consulting relationship. A simple model of key critical success factors will be proposed. Finally, the article will consider these critical success factors through a case study of a North American telecommunications organization.  Armenakis and Burdg (1988) published an extensive review of consultation research up to the late 1980s. One key area proposed for further research is that of consultation success.  Many investigations have cited criterion problems in determining the success of consulting efforts. They note that "hard" criteria, such as productivity and profitability, are often not applicable to consultant programs. Instead, much of the research on consultation is based on criteria such as self-reported measures of satisfaction, leadership and group process. Further, early studies of OD effectiveness tended to focus on comparisons between the techniques used, rather than the actual behaviors exhibited by consultants during the intervention process.  O'Driscoll and Eubanks (1993) applied a behavioral competency model to organizational development interventions to assess perceived frequencies of a range of consultant behaviors and goal setting activities and their contribution to overall consultation effectiveness. The indicators of effectiveness they used included organizational outcomes, organizational processes and characteristics of the consultation itself.  Major contributors to effective consulting, for consultants, were data utilization and the setting of specific goals.
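To illustrate the type of model reported in the abstract above (six predictors regressed on an overall project success rating, evaluated via adjusted R2 and an F-statistic), here is a minimal Python sketch. The predictor values, coefficients, and responses are simulated assumptions for illustration only; they are not the study's survey instrument or data.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 102  # respondents, matching the sample size reported in the abstract
# Hypothetical 1-7 agreement ratings for the six success-factor items.
X = rng.integers(1, 8, size=(n, 6)).astype(float)
beta = np.array([0.35, 0.20, 0.15, 0.15, 0.10, 0.10])  # illustrative weights
y = X @ beta + rng.normal(0, 1.0, n)                    # simulated overall success rating

X = sm.add_constant(X)            # add an intercept term
model = sm.OLS(y, X).fit()        # ordinary least squares fit
print(model.rsquared_adj, model.fvalue)  # adjusted R^2 and F-statistic, the quantities reported above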

 

A Study on Entrepreneurial Attitudes Among Youths in Malaysia. Case Study: Institute Kemahiran Belia Negara, Malaysia

Jumaat  Abd Moen, National University Malaysia, Bangi, Selangor, Malaysia

Ishak Hj Abd Rahman, National University Malaysia, Bangi, Selangor, Malaysia

Mohd Fairuz Md Salleh, National University Malaysia, Bangi, Selangor, Malaysia

Rohani Ibrahim, Universiti Teknologi Mara, Syah Alam Selangor,Malaysia

 

ABSTRACT

The objective of this study is to examine the entrepreneurial attitudes among the IKBN trainee youths in Malaysia and to identify the relationship between demographic factors, educational background, respondents' experience, and parental education and occupation, on the one hand, and entrepreneurial attitude orientation on the other. Society has placed its hopes in IKBN as the first skills center for expanding entrepreneurship among the trainees. To understand clearly the factors that can influence entrepreneurial attitude orientation, a model of the formation of entrepreneurial attitude orientation was proposed. In this study, a test instrument on entrepreneurial attitude and a questionnaire on respondents' demography were used. This study was carried out on all the IKBN trainees in Malaysia.  Findings show that residential area, field of study in IKBN and in school, parents' education and fathers' occupation have a significant relationship with entrepreneurial attitudes. A businessman or an entrepreneur has certain personality qualities. These quality aspects consist of attitudes, values and the spirit to achieve success. Morris (1985) regarded attitude as one of the important quality aspects because a person's attitude plays an important role in determining whether he or she has an interest in a particular business.  Schumpeter (1934) introduced the innovation concept as the basis of entrepreneurship. He says that entrepreneurs are those who innovate in the following matters: introducing new goods or services; introducing new methods of production; opening up new markets; sourcing new raw materials; and organizing a new venture in any industry. Schumpeter's statement indicates that entrepreneurs are those who are innovative in producing any product. Today, entrepreneurial activity is generally accepted as making a positive and productive contribution to the economic development of the country. Entrepreneurs are regarded as essential agents of change, using indigenous resources to produce for the market and to manage business (Pasual, 1990). As a result, society holds entrepreneurship in high regard. This perception may change the attitudes of individuals, who may turn to the entrepreneurial field rather than remain jobless. It should be an attractive field for the IKBN trainees who have chosen entrepreneurship as an attractive occupation.  To be a successful entrepreneur, one should have the characteristics of a successful entrepreneur. Dewing (1919) listed the most important characteristics of a successful entrepreneur as follows: imagination ability; effort ability; deliberation and maintenance.  The factors mentioned above show that the most important characteristics of a successful entrepreneur depend on the imagination ability, effort ability, and deliberation and maintenance of an individual, which are related to the individual's attitude. A positive attitude towards entrepreneurship can give a person imagination ability, effort ability, as well as deliberation and maintenance in managing a business. According to Daim (1994), graduates should think of themselves as people who create jobs rather than people who seek jobs. Circumstances have changed, and gone are the days when they had to wait for a job. The country needs entrepreneurs with caliber and vision for the future to continue contributing to ensure that the country's economic development is maintained and improved.
Therefore, the contribution of the new generation of Malaysian student entrepreneurs includes the IKBN trainees who have acquired their skills and training through the curriculum found in IKBN. The practical skills and training acquired should be used as a base for cultivating the attitudes and basics needed to venture into the entrepreneurial field.  Even if a person has business and entrepreneurship knowledge, without a positive attitude towards entrepreneurship he or she may not plunge into this field. A change towards a more positive attitude to the entrepreneurial field should be emphasized through the IKBN curriculum to produce more entrepreneurs among the trainees. Current entrepreneurship training may place its emphasis on the development of a particular business and not on the development of a change in the trainees' attitudes towards entrepreneurship. If this can be proven, one approach is an entrepreneurial curriculum at the IKBN stage that changes the trainees' attitudes and mindsets so that they do not see seeking a job as the accomplishment in life and the source of status in society, but instead make IKBN a source of entrepreneurial production with the potential to improve the trainees' prospects in the near future.

 

A Comparison of Economic Reforms and Instability Effects in Three Large Emerging Markets

Dr. Parameswar Nandakumar, Indian Institute of Management Kozhikode, India

Dr. Cheick Wagué, South Stockholm University, Södertörn, Sweden

 

ABSTRACT

An additive decomposition analysis is made of the external sector developments in three countries, China, India and Korea, for the period 1974-2000, for the purpose of distinguishing between the results of policies and external influences. The growth in exports is disaggregated into that due to additional primary exports and that arising from diversification into more value-added manufactures. The effects on imports of constraining primary imports while opening up to valuable capital goods imports are also weeded out. The terms of trade effect on the current account, as well as the effects of increases in debt and in interest on debt, are also separated out. In general, specific reform policies such as export diversification have succeeded more in Korea and, to a lesser extent, in China, while the Indian experience has been positive only in recent years. Also, external forces have had their say relatively more in India. However, the feedback from financial instability to the real economy is noted only in Korea, as borne out by Granger causality tests.  The contagious 'Asian Crisis' of the 1990s has added new dimensions to the discussions of the relative merits of various development strategies, of those labeled as export-oriented and inward-looking in particular. Thus there is an emerging view that the performance of the so-called Asian Tigers over the last few decades ought not to be considered as an unqualified success, with a correspondingly (more) understanding eye being turned towards the enigmatic Asian giants, China and India. But while such a willingness to shed established prejudices is a welcome sign, fresh conclusions tend to be based on the immediate pre- and post-crisis developments in the dynamic Asian economies.  In this paper, a comparative study of China, India and Korea covering the period 1974-2000 is attempted. The period chosen covers the years of the greatest Asian successes as well as the crisis years, and the analysis focuses on the developments in the external sector, with particular emphasis on the current account, which is used to mirror policy choices and external factors affecting the economy.  A disaggregated analysis of the developments in the current account - which is in itself of primary concern for heavily indebted Asian countries - is used as a take-off point for the analysis, as in Joseph and Nandakumar (1994). But the menu of policies considered here is more extensive, and includes financial liberalization, which would figure as an important factor for these economies at least in the 1990s. Thus the impact of capital flows and resulting exchange rate changes on the current account is considered specifically. Here CD, M and X are the current account deficit, value of imports and exports, respectively, in domestic currency (the subscript t representing the time period is omitted for convenience), Pm and Px represent import and export unit value indices, and r is the average interest rate on debt.  D is the stock of external debt (obtained as the stock in dollar terms times the exchange rate e) and T is net transfers, inclusive of investment income, in domestic currency. Table 1 provides a complete list and explanation of all the symbols used in the paper. All the variables listed in the table are in domestic currency units unless otherwise stated, and the qualification constant prices refers to 1980 prices.
TABLE 1 HERE.  It will be useful to express the current account deficit as a percentage of GDP, since comparisons between entities of varying sizes and currency units are being undertaken. So, rewriting equation [1] in terms of ratios to GDP, and differentiating, we obtain the decomposition sketched below.
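Equation [1] and its GDP-ratio transformation appear to have been lost in transcription. A plausible reconstruction, based only on the variable definitions given above (an assumption rather than the authors' exact formulation, since the original treatment of interest and transfers may differ), is:

CD = M - X + rD - T                                   [1]

and, dividing through by GDP (denoted Y) and differentiating,

d(CD/Y) = d(M/Y) - d(X/Y) + d(rD/Y) - d(T/Y)

which allows each change in the current account deficit ratio to be attributed additively to import and export developments, debt and interest effects, and transfers.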

 

New Evidence on the Impact of Federal Government Budget Deficits on the Nominal Long Term Interest Rate Yield on Moody’s Baa-Rated Corporate Bonds

Dr. Carl T. Massey, Jr., Armstrong Atlantic State University, Savannah, GA

Dr. Richard T. Connelly, Armstrong Atlantic State University, Savannah, GA

Dr. Richard J. Cebula, Armstrong Atlantic State University, Savannah, GA

 

ABSTRACT

This study empirically investigates the impact of the federal budget deficit on the nominal long term interest rate yield on Moody’s Baa-rated corporate bonds over the period 1946-2002. In a system that includes the ex ante real short term interest rate, expected inflation, changes in per capita real GDP, and the ratio of the federal budget deficit to the GDP level, IV estimation reveals that the total budget deficit has acted significantly to raise the nominal corporate bond yield. This finding is consistent with certain earlier studies and implies the possibility of at least partial “crowding out.”   The impact of federal government budget deficits on interest rates has been studied extensively [Barth, Iden and Russek (1984; 1985), Carlson and Spencer (1975), Cebula (1988; 1997), Feldstein and Eckstein (1970), Findlay (1990), Hoelscher (1983; 1986), Holloway (1988), Johnson (1992), Mascaro and Meltzer (1983), McMillin (1986), Ostrosky (1990), Swamy, Kolluri, and Singamsetti (1990), Zahid (1988)]. These studies typically are couched within IS-LM or loanable funds models or variants thereof. Many of these studies find that the federal budget deficit acts to raise longer term rates of interest while not significantly affecting shorter term rates of interest. Since capital formation is presumably much more affected by long term than by short term rates, the inference is often made that federal government budget deficits may lead to at least partial "crowding out" [Carlson and Spencer (1975), Cebula (1985)]. This study seeks to investigate the impact of the federal budget deficit on the nominal Moody’s Baa-rated corporate bond interest rate yield over the long run. The “long run” for purposes of this study begins with the end of World War II, 1946, and runs through the year 2002. No published study to date has included such a long time frame in a single estimate. In focusing on such a relatively long time frame, this study seeks in effect to discover whether there is a “historical” impact of deficits on nominal long term interest rates. The focus is on the nominal interest rate yield on long term corporate bonds rather than a short term interest rate yield because, according to conventional macroeconomic theory, it is the long term interest rate that influences aggregate capital formation/investment decisions.  Section II provides the system for the empirical analysis. Section III defines the variables in the empirical model and describes the actual data, including the measurement of expected inflation and the ex ante real short term interest rate. Section IV provides the empirical results, whereas a brief summary is found in section V.  II. The Basic Framework  In developing the underlying framework for the empirical analysis, we first consider the following intertemporal government budget constraint: To identify the significant determinants of the nominal long term interest rate yield on corporate bonds, including the impact of the deficit on same, a framework is adopted in which the nominal long term interest rate is determined by a loanable funds equilibrium of the following form [Barth, Iden, and Russek (1985), Cebula (1992), Hoelscher (1986)]:  It is expected that, in principle paralleling Barth, Iden, and Russek (1985), Cebula (1992; 1997), and Hoelscher (1986), the real private sector demand for long term corporate bonds is a decreasing function of the ex ante real short term rate yield on Treasury bills. 
In other words, as EAR increases, ceteris paribus, bond demanders/buyers at the margin substitute the shorter term issues for the longer term ones.
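The loanable-funds specification referred to above appears to have been dropped in transcription. A stylized reduced form consistent with the variables named in the abstract (a sketch, not the authors' exact equation) would be:

Baa_t = a0 + a1 EAR_t + a2 PE_t + a3 dPCRGDP_t + a4 (DEF/GDP)_t + u_t

where Baa is the nominal Moody's Baa-rated corporate bond yield, EAR the ex ante real short term interest rate, PE expected inflation, dPCRGDP the change in per capita real GDP, DEF/GDP the federal budget deficit as a share of GDP, and u a stochastic error term; the crowding-out hypothesis corresponds to the expectation that a4 > 0.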

 

Issues and Challenges of Accounting Education in China: Practitioner and Academic Perceptions

Tsui-chih Wu, Shih Chien University, Taipei City, Taiwan

Yealing Tong, Takming College, Taipei, Taiwan

 

ABSTRACT

This paper examines and analyzes the current situation and challenges of accounting education in China. Questionnaire surveys were directed to accounting educators from five mainstream universities as well as accounting practitioners. Taken overall, responses from the two groups are quite homogeneous on the majority of issues in the study. The results show China has experienced an increased pace of change in the accounting environment due to internationalization and economic openness. While accounting educators, public accounting, and industry see tremendous value in an accounting education, there is an urgent need for changes in curriculum development and teaching, the reward structure, and ways of communication with the practice community. China's booming economy over the past ten years has indeed brought about serious problems and challenges for accounting education. The research results may provide some early feedback for the development of accounting education in emerging economies. Since 1978, China has adopted an open-door policy and a series of economic reforms. The shift from the centrally planned economy to a market-oriented economy has brought about major renovation and the promulgation of new laws and rules in the areas of securities administration and accounting. In particular, the structural changes in the accounting regulatory framework have been undertaken to meet the pressure of international comparability and the demand for accounting information by capital providers and business managers. As a result, the goal of reforming the accounting system has been to establish a new conceptual framework that combines the unique Chinese socialist characteristics and generally accepted international accounting norms.  Chinese accounting reform has involved multiple significant stages, and the Accounting Law of the People's Republic of China was enacted by the National People's Congress in 1985. The Accounting Law broadly stated the functions of accounting, the organization of accounting work and the authorities and duties of accounting personnel as well as their legal responsibilities. In the same year, the Accounting System for Sino-Foreign Joint Ventures was issued by the Ministry of Finance. The system included many accounting concepts, principles and rules that were seen as the prelude to the accounting standard-setting process. In 1992, the Accounting Standards for Business Enterprises (ASBE) were approved by the State Council and became effective on July 1, 1993. The main chapters in the ASBE include objectives and users of financial reporting, accounting postulates and principles, qualitative characteristics, elements of financial statements, and recognition and measurement. The ASBE can be seen as a set of basic accounting standards, or a conceptual framework. Most recently, in 2001, the Ministry of Finance enacted the new "Accounting System for Enterprises" to replace the various old accounting systems organized by industry and ownership. The new accounting system is aimed at improving the reliability and comparability of accounting information and enhancing harmonization with international practices.  A serious impediment encountered in the current economic reform and development of the securities market, however, has been the lack of qualified and competent accounting personnel. In contrast to the U.S.
where the number and quality of students electing to major in accounting are decreasing rapidly and an accounting degree is deemed to be less valuable than other business degrees (Albrecht and Sack, 2000), the situation in China is strikingly different. The economic reform has created huge demand for accounting workers and thus rapid expansion of accounting education (Yu, 1995; Lu, 1995; Chen, Jubb and Tran, 1997). As China gained its bid for World Trade Organization (WTO) entry in December 2001, the openness of its financial and securities markets has posed unprecedented challenges to the accounting profession. Further, in the process of gradual internationalization, China still has to face the technology revolution and globalization that have significantly impacted the business environment worldwide. Despite the mushrooming demand for qualified accountants, the lack of professional education and the lack of knowledge of modern business activities have hindered the development of the profession. This paper, therefore, aims to analyze the current situation and challenges of accounting education in China. In particular, this paper is designed as a comparative study, with the aim of understanding the issues and dilemmas of accounting education in transitional economies such as China.

 

Exchange Rate Regime Choices for China

Dr. Xiaoping Xu, Huazhong University of Science & Technology, Wuhan, Hubei, P.R. China

 

ABSTRACT

What exchange rate regime should China adopt in the 21st century? This paper analyzes different exchange rate regime choices for China's economy. It indicates that in China's case, a fixed exchange rate regime cannot mitigate both real and financial economic shocks in the long run; a complete floating exchange rate regime is not appropriate for China either, because free movements of international capital and floating exchange rates are basically incompatible in China. The conclusion that flows from this paper is that the future choice for China's exchange rate regime is a floating rate within a band system. It is an astonishing fact that all the massive crises of the past ten years, the really big ones, have been associated with the collapse of formally fixed or quasi-fixed exchange rate systems: Mexico in 1994; the three Asian IMF program countries of Thailand, Indonesia, and Korea; Russia in 1998, in many ways the most consequential of the crises for the rest of the world; and Brazil in 1998 and 1999. These crises offer strong evidence about the role of exchange rate systems (Fischer, 2000). So rethinking the Chinese exchange rate regime choices is of great importance now.  Since October 1987, the IMF has classified China as having a managed floating exchange rate regime, which is a de facto peg arrangement under managed floating. China has successfully maintained its exchange rate stability for over a decade, even during the Asian Financial Crisis in 1997, with the assistance of capital controls, providing an important element of stability in the regional and global economies. However, having become a member of the WTO in 2001, China will experience more fluctuations in financial markets as it relaxes its restrictions on capital movements and liberalizes its financial markets. China has to choose a more flexible exchange rate regime in the near future to adjust to both nominal and real shocks. However, this choice is not easy, and this process will be achieved gradually.  This paper first reviews briefly the arguments for fixed and floating exchange rate regimes. In the next section, it presents a history of China's fixed exchange rate regime in the 1990s. It then tries to evaluate the floating exchange rate regime for China, and finally it suggests that China should choose a floating exchange rate within a band system in the near future.  The breakdown of the Bretton Woods system has not stopped the debate about the relative merits of fixed versus floating exchange rate regimes. In this section the arguments for fixed and floating exchange rate regimes are reviewed.  The case for floating exchange rates has two main elements: monetary policy autonomy and automatic trade balance adjustments.  Monetary policy autonomy: It is argued that a floating exchange rate regime gives countries monetary policy autonomy. Under a fixed system, a country's ability to expand or contract its money supply as it sees fit is limited by the need to maintain exchange rate parity. Monetary expansion can lead to inflation, which puts downward pressure on a fixed exchange rate (as predicted by PPP theory). Similarly, monetary contraction requires high interest rates (to reduce the demand for money). Higher interest rates lead to an inflow of money from abroad, which puts upward pressure on a fixed exchange rate. Thus, to maintain exchange rate parity under a fixed system, countries were limited in their ability to use monetary policy to expand or contract their economies.
Advocates of a floating exchange rate regime argue that removal of the obligation to maintain exchange rate parity restores monetary control to a government. If a government faced with unemployment wanted to increase its money supply to stimulate domestic demand and reduce unemployment, it could do so unencumbered by the need to maintain its exchange rate. While monetary expansion might lead to inflation, this in turn would lead to a depreciation in the country's currency. If PPP theory is correct, the resulting currency depreciation on the foreign exchange markets should offset the effects of inflation. Put another way, although under a floating exchange rate regime domestic inflation would have an impact on the exchange rate, it should have no impact on the international cost competitiveness of the country's businesses, due to exchange rate depreciation. The rise in domestic costs should be exactly offset by the fall in the value of the country's currency on the foreign exchange markets. Similarly, a government could use monetary policy to contract the economy without worrying about the need to maintain parity.  Trade balance adjustments: Under the Bretton Woods system, if a country developed a permanent deficit in its balance of trade that could not be corrected by domestic policy, the IMF would agree to a currency devaluation. Critics of this system argue that the adjustment mechanism works much more smoothly under a floating exchange rate regime. They argue that if a country is running a trade deficit, the imbalance between the supply and demand of that country's currency in the foreign exchange markets will lead to depreciation in its exchange rate. An exchange rate depreciation should correct the trade deficit by making the country's exports cheaper and its imports more expensive. The case for fixed exchange rates rests on arguments about monetary discipline, speculation, uncertainty, and the lack of connection between the trade balance and exchange rates.  Monetary discipline: The need to maintain a fixed exchange rate parity ensures that governments do not expand their money supplies at inflationary rates. While advocates of floating rates argue that each country should be allowed to choose its own inflation rate (the monetary autonomy argument), advocates of fixed rates argue that governments too often give in to political pressures and expand the monetary supply far too rapidly, causing unacceptably high price inflation. A fixed exchange rate regime will ensure that this does not occur.  Speculation: Critics of a floating exchange rate regime also argue that speculation can cause fluctuations in exchange rates.
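As a worked illustration of the PPP offset argument sketched above (an illustrative example, not one drawn from the paper): under relative purchasing power parity,

%Δe ≈ π_home − π_foreign

so if domestic inflation runs at 8 percent while foreign inflation is 3 percent, the home currency should depreciate by roughly 5 percent over the same period, leaving the international cost competitiveness of domestic firms approximately unchanged.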

 

Sources of Competitive Advantage: Differential and Catalytic Dimensions

Fred Amofa Yamoah, International University - London Centre, UK

 

ABSTRACT

A key challenge confronting all managers is achieving consumer satisfaction in an ever-changing business environment. It is necessary to demonstrate that the changing business environment has been a major factor behind the varied sources of competitive advantage and corporate strategies that are implemented by many businesses. A literature review spanning more than six decades is provided, highlighting the need to differentiate between sources of competitive advantage for corporate success. Towards this goal, a two-tier classification, termed remote and immediate sources of competitive advantage, is proposed, with a discussion of its theoretical and managerial implications and an outline of future research directions.  Every profit-making organisation invests some amount of resources to create worth for the owners/stakeholders. In the case of non-profit making organisations, the worth generated is employed in various charitable projects.  However, a common challenge to all managers in today's business environment is achieving customer satisfaction in a changing environment. Hence, business decisions taken and strategies formulated and implemented to address this challenge have always tended to be critical to organisational success. One popular management concept, among others, that has been employed over the years in an attempt to achieve customer satisfaction in a constantly changing business environment is gaining competitive advantage (Hoffman 2000).  In its simplest sense, the competitive advantage concept means devising unique features to attract consumers away from one's competitors; in other words, the creation of a relatively superior product or service that better serves the needs of consumers. This paper attempts to widen the scope of competitive advantage to cover possible investments, activities or strategies that would help shape the future business environment and, in so doing, could be leveraged to gain a competitive edge. Competitive advantage has been achieved by some businesses through the adoption of strategies ranging from the basic competitive adaptation concept of the late 1930s (see Alderson 1937) to the relational and intellectual sources of competitive edge, sometimes referred to as intangible assets, of the late 1990s (Srivastava et al 1998). Given the aforementioned changes in the form and substance of competitive strategies, it is apparent that the concept has evolved with time from a simple adaptive strategy of suppliers specialising to cater for varied buyer demand to a complex level of exploiting intangible assets such as an organisation's relational and intellectual assets. What drives all these strategies is the common quest to add unique value for customers and hence achieve corporate targets. It is important to reiterate that, against the background of business environment dynamics, the changing face of competitive strategy will continue to be an important feature and a challenge to academics and practitioners operating at the national as well as the global level.  Alderson (1937), one of the earliest works on competition (see Hoffman 2000), at one end of the strategic management spectrum asserted that a fundamental aspect of competitive adaptation is the specialisation of suppliers to meet variation in buyer demand. At the other end of the strategic management spectrum, Srivastava et al (1998) also convincingly suggested that, if relational and intellectual market-based assets could be employed to add special value for customers, then they could be leveraged to gain competitive advantage.
Within the relational perspective, some relationship marketing and brand management researchers have also expressed similar strategic implications for competitive advantage (Aaker 1997, Fournier 1998, Prahalad & Ramaswamy 2000). Competitive advantage thus results from a continuous process of business-consumer engagement to create a joint 'competitive reality' for a business and its consumers (Louro & Cunha 2001).  Apart from these examples, other sources of competitive advantage have been documented in the academic literature (Alderson 1965, Hall 1980, Henderson 1983, Porter 1985, Day & Wesley 1988, Hamel & Prahalad 1990, Day & Nedungadi 1994). Therefore, there is ample evidence in the literature indicating that many sources of competitive advantage and strategies have been suggested and/or implemented to achieve business success. The common precursor to all these strategies is the ever-increasing number of competitors in a changing business environment.  Whereas the need for a business to strive for unique features to achieve a competitive edge (Alderson 1965) has been popular with managers, Hall (1980) asserted that for a business to succeed in a hostile environment it ought to achieve either the lowest cost or the most differentiated position. It is obvious from this position that an organisation that can undertake strategies to become the custodian of the most differentiated position in terms of products and services and also maintain the lowest cost within a given industry will be way ahead of its competitors. It is important to note that the attainment of such a position is easier said than done. Henderson (1983), in the article 'The Anatomy of Competition', emphasized that organisations that are able to adapt best or fastest will gain an advantage relative to their competitors. The message to managers is to respond to changes in the business environment by providing the best option in terms of product or service, or to be the quickest to respond to the needs of the market.  The importance of speed in meeting the needs of a given market in a competitive environment cannot be overemphasised, but an additional issue that this study seeks to explore is the possibility of influencing the 'future market' to the advantage of a business.

 

Comprehensive Income: Evidence on the Effectiveness of FAS 130

Dr. Bruce Dehning, Chapman University, Orange, CA

Dr. Paulette A. Ratliff, Arkansas State University, Jonesboro, AR

 

ABSTRACT

Statement of Financial Accounting Standards No. 130 (FAS 130) “Reporting Comprehensive Income” requires all publicly traded companies to include a Statement of Comprehensive Income in their set of basic financial statements.  As Comprehensive Income items have previously been disclosed in various parts of the financial statements, listing these items in statement form provides no information that has not already been available.  Therefore, if markets are efficient, the disclosures required by FAS 130 should not affect firm value.  The purpose of this paper is to provide empirical evidence of the usefulness of CI disclosures as required by FAS 130.  In this study, we examine data for firms in periods immediately before and after enactment of FAS 130 rules.  We find that there is no difference in the market’s valuation of comprehensive income adjustments before and after the implementation of FAS 130.  This is consistent with the efficient markets hypothesis in that there is no change in the way the market values the information due solely to the placement of the disclosure.  Unless other benefits of FAS 130 can be shown, this statement appears to require additional information without a commensurate payback.  Effective for fiscal years ending after 12/15/98, all publicly traded companies are required to include a Statement of Comprehensive Income in their set of basic financial statements.  Statement of Financial Accounting Standards No. 130 (FAS 130) defines comprehensive income (CI) as “the change in equity (net assets) of a business enterprise during a period from transactions and other events and circumstances from nonowner sources.  It includes all changes in equity during a period except those resulting from investments by owners and distributions to owners.”(1)  FAS 130 specifically includes items such as unrealized gains and losses on certain marketable securities (SEC), foreign currency items (FCT) and pension liability adjustments (PEN).  As these items have previously been disclosed in various parts of the financial statements, listing these items in statement form provides no information that is not already available.  Therefore if markets are efficient, the disclosures required by FAS 130 should not affect firm value.  The Financial Accounting Standards Board (FASB) holds that “disclosure is not an adequate substitute for recognition… The usefulness and integrity of financial statements are impaired by each omission of an element that qualifies for recognition.”(2) Firms must expend resources to prepare this statement, so if no benefit ensues, the FASB’s requirement adds undue burden on firms required to comply.  Alternatively, if investors find that recognition of CI items reduces their cost to forecast earnings, cash flows or otherwise assist in valuing the firm then the information is value relevant.  The purpose of this paper is to provide empirical evidence of the usefulness of CI disclosures as required by FAS 130 “Reporting Comprehensive Income.”  The results of ex ante association studies of CI and returns have been mixed.  The real measure of whether the new disclosures affect the way the market processes CI information can be measured only after the disclosures have been made.  In this paper, we regress CI on returns for periods before and after FAS 130 statement requirements went into effect to measure whether or not these disclosures are valued by market participants.  
In anticipation of FAS 130, studies were undertaken to evaluate the effectiveness of the information to be disclosed as required by the forthcoming statement.  Dhaliwal, Subramanyam, and Trezevant [1999] (hereafter DST) conducted a study examining the association between returns and measures of income including traditional net income (NI), comprehensive income as defined under FAS 130 (COMP130), and comprehensive income broadly defined (COMPbroad).(3)    Examining firm-years with COMPUSTAT and CRSP data available for 1994 and 1995, their results indicate no clear evidence that CI measures are any better at explaining returns than is net income.  Examining individually the three comprehensive income components requiring disclosure under FAS 130 (marketable securities adjustment, foreign currency translation adjustment, and minimum pension liability adjustment), they find that only the marketable securities adjustment improves the explanatory power (R2) of the model and that this result appears to be driven by financial sector firms.  The other adjustments seem to add only noise to the model, that is, they are not decision relevant.  Biddle and Choi [2002] extend the time frame used by DST to 1994-1998 and examine the relative value relevance of NI, NI130 and NIbroad (variables defined as in DST above) in explaining returns.  Their findings indicate a stronger association between returns and NI130 than either of the other income measures (NI and NIbroad), supporting the FASB’s position that disclosure of comprehensive income is useful. Further analysis indicates that the SEC component provides the greatest improvement to the net income figure. Both the SEC and FCT components yielded incremental value relevance.  Finally, the authors examine the association between NI, NI130 and NIbroad and executive cash compensation.  Here NI dominates CI measures. 

 

On Acculturation of Business Acquisition: The Case of Two Machine Tool Manufacturers in Taiwan

Nelson N. H. Liao, Chaoyang University of Technology, Taichung County, Taiwan

 

ABSTRACT

In 2000, one aggressive machine tool manufacturer in Taiwan acquired another major machine tool manufacturer, retaining all employees of the acquired firm and providing financial and managerial support. The present paper examines the issue of acculturation between the acquiring firm and the acquired firm in terms of organizational culture and employees’ working attitudes. The paper introduces a conceptual research framework, followed by implementation of a structured company-wide questionnaire to which the employees of the two firms responded twice in one year. Based on reliability checks, descriptive analysis, Pearson correlation analysis, independent samples t-tests, stepwise regression analysis, and interviews with key members of the two firms, the major findings are as follows: (1) Organizational culture is significantly correlated with employees’ working attitudes. (2) The dimensions of employees’ working attitudes at the acquired firm that differed significantly from those of their counterparts at the acquiring firm were reduced one year after the acquisition. (3) The strategy of retaining individual brands led to separate preferences for the acquiring firm and the acquired firm, and most employees had no consensus on whether the acquisition was successful.  Mergers have proven to be a significant and increasingly popular means for achieving corporate diversity and growth (Nahavandi and Malekzadeh, 1988). In an examination of Dutch merger activities, participating firm managers responded that firms were able to achieve (1) an increase in market power, (2) an increase in sales, (3) the creation of additional shareholder wealth, (4) increased profitability, and (5) marketing economies of scale in most of the mergers (Brouthers, Hastenburg and Ven, 1998).  Two independent streams of management research have studied mergers and acquisitions: one stream has examined the cross-sectional relationship between firm-level measures of financial performance and the strategic fit of the acquiring and acquired firms; another has examined the cultural fit of the acquiring and acquired firms and its impact on the success of the combination (Chatterjee et al., 1992).  Most of the existing research remains in the theory-building stage (Jemison and Sitkin, 1986; Nahavandi and Malekzadeh, 1988). In studying mergers and acquisitions, “culture” has often been mentioned as a variable influencing the implementation of strategic decisions. However, the role of socio-cultural factors and the processes involved in merging two organizations treated as cultural entities have not been studied thoroughly. Theories from cross-cultural psychology are adapted here to explain the processes of cultural adaptation and acculturation in mergers. During the implementation of mergers, the members of the two organizations may not have the same preferences regarding a mode of acculturation. The degree of congruence (degree of agreement) between each side’s preference for a mode of acculturation will be a decisive factor in the successful implementation of the merger (Nahavandi and Malekzadeh, 1988). The few studies that have tested for a relationship between cultural fit and merger performance selected a fragmented set of criterion variables, such as employee motivation and attitudes, and focused on participants in only one merger at a time (Chatterjee et al., 1992).  
According to the yearly statistics of the Fair Trade Commission in Taiwan, there were approximately six cases of mergers and acquisitions a year during the early 1990s, growing to the present 1,000 per year. In the year 2000, one aggressive machine tool manufacturer in Taiwan acquired another major machine tool manufacturer, retaining all employees of the acquired firm and providing financial and managerial support. The two firms retained their individual brands and operated independently. The owner of the acquiring firm commissioned the authors of the present study to undertake company-wide opinion surveys to examine the evolution of organizational features in the acquired firm, which provided the authors with a rare opportunity to examine the issue of acculturation as reflected in systematic differences between the acquiring firm and the acquired firm in terms of organizational culture and employees’ working attitudes. The objective of this paper is to demonstrate empirically how the underlying concepts and methodologies of the cultural fit and employees’ working attitudes approaches, when combined, can make an important contribution towards understanding the implementation performance of related acquisitions. The objectives of this paper can be summarized as follows: (1) identifying statistically significant dimensions in both the organizational culture and the employees’ working attitudes of the acquiring firm versus the acquired firm, so that possible sources of incongruence can be identified; (2) investigating whether the cultural differences between the acquiring firm and the acquired firm include different employees’ working attitudes across groups; and (3) investigating statistically whether the significant dimensions in both organizational culture and employees’ working attitudes of the acquiring firm versus the acquired firm are reduced, so that possible sources of incongruence are reduced within one year after the acquisition.  
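To make the statistical comparisons named in the abstract concrete, the following is a minimal Python sketch assuming hypothetical Likert-scale questionnaire data; the variable names and simulated scores are illustrative only and are not the study's actual data.

import numpy as np
from scipy import stats

# Hypothetical 5-point Likert scores on one organizational-culture dimension,
# one array per firm; the real study uses company-wide questionnaire responses.
rng = np.random.default_rng(0)
acquiring = rng.normal(3.8, 0.6, size=120).clip(1, 5)
acquired = rng.normal(3.4, 0.7, size=95).clip(1, 5)

# Independent-samples t-test: does the culture dimension differ between the two firms?
t_stat, p_val = stats.ttest_ind(acquiring, acquired, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_val:.4f}")

# Pearson correlation between culture scores and working-attitude scores
# (simulated here as noisily related, purely for illustration).
attitudes = 0.5 * acquiring + rng.normal(0, 0.5, size=acquiring.size)
r, p_r = stats.pearsonr(acquiring, attitudes)
print(f"r = {r:.2f}, p = {p_r:.4f}")

Repeating such comparisons on the two survey waves, dimension by dimension, is one way the number of significantly different dimensions before and after the acquisition could be counted.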

 

Perspectives on Privacy

Dr. William H. Friedman, University of Central Arkansas, Conway, AR

 

ABSTRACT

Privacy, while rarely a major social concern before 1900, has recently become a high-profile issue, bordering on obsession for the general public as well as for the computer and the business worlds. Discussants of privacy rights often take much for granted, and in the most extreme cases, their assertions about privacy and rights are made in a tone of almost “axiomatic” self-certainty. They typically proceed with the full expectation that the intended audience will assent to the spokesperson’s positions without question. There have been many proposed and actual extensions of the scope of privacy, which have now progressed to demands for shielding virtually all information about anything an individual might wish to keep secret, despite the existence of reasonable, competing values. For example, in the US, a student has a legal right to keep his/her parents, whether they defray the student’s tuition or not, from ever learning from a university that the student has failed every course.  The most common starting point for discussions on privacy is that it is a natural, inviolable right as well as an important value. When a value-laden policy position has attained such unquestioned influence, it is both appropriate and timely to examine critically the extent of its applicability and whether it in fact embodies the overarching values (Gurak 2002) attributed to it. This paper attempts just such an analysis from an information technology, business, social, legal, and philosophical perspective. The values competing with privacy and the matter of the origins of privacy and other rights play a central role in this analysis.  It is useful to consider three definitions of privacy, which give the range of common usages and, therefore, will be used to set the parameters for this paper. The original meaning was apparently a notion like “the quality or state of being apart from company or observation” and then came to mean “freedom from unauthorized intrusion.” (Britannica & Merriam-Webster's Collegiate Dictionary 1997). One should notice that the first definition emphasizes the actions and desires of a person secluding him/herself, and the outsider is regarded as having the role of passive observer. There is even the possibility that the privacy seeker is unaware of being observed. The second definition, however, views the outsider as actively disturbing the privacy seeker without authority or invitation. A physical intrusion, of course, does not usually go unnoticed.  The question of what the “authority” for the intrusion is raises some important issues.  Is it the authority of the privacy-seeker that is involved or perhaps some governmentally sanctioned intrusion? Is the latter justifiable? Are we to extend or understand the second definition also to include intrusions not noticed by the privacy seeker, say, surreptitious scanning of his/her computer files (Brandt 2001)?  Is the harm greater if one notices the intrusion only after the fact? Does the harm derive only from the intruder’s causing pain or damage to the person whose privacy is invaded?  
A third definition from another source (QPB Dictionary of Ideas 1996) states that privacy is “a right of the individual to be free from secret surveillance (by scientific devices or other means) and from the disclosure to unauthorized persons of personal data, as accumulated in computer data banks.” This definition introduces additional, very important considerations, especially in light of the questions raised in connection with the previous definitions: (1) surveillance, not merely observation—the implication here is that there is more constant observation with a definite purpose; (2) disclosure of what is observed (personal data) to unauthorized persons—still, however, leaving open the question of who gives the authorization; and (3) computer banks—as if other means of storage, like hard-copy dossiers, are not of such great consequence now.  Undeniably, what makes “computer banks” so crucial to any discussion of privacy is that they involve rapid collection, storage, retrieval, and dissemination. Additionally, the computer provides for such activities on a massive scale and often in hard-to-detect ways.  People often unconsciously create, out of their personal desire for something or very strong feelings about something, a right with respect to that something. A partisan of natural law, of course, assumes that his/her notions of privacy are common to all humankind.  A proponent of positive law, on the other hand, would say that laws of privacy are the product of human action and are legitimately imposed by the society on itself.  Aquinas cites human or positive law, “which must be framed by human societies to achieve the order and peace needed for perfection” (Beck 1979).  It will be an important part of this paper to deal with inner peace and the diminishment of anxiety due to privacy concerns.  Any society wishing to create privacy laws should, however, be ready to attempt to justify its privacy proposals on grounds acceptable to all parties, irrespective of their views on the nature and source of law, and even to skeptics of the need for such legislation. One way to reach a practical consensus among parties with widely differing views on what makes rights and laws valid is to stipulate that rights and laws exist to make society function more smoothly and to ensure that the people will be secure and relatively happy. After all, no advocate can consistently contend that nature (or whatever is the source of rights) could be wrong about what is best for society. Even proponents proceeding from absolute metaphysical first principles would find comfort in thinking that their metaphysical beliefs were confirmed by pragmatic and empirically verified success. 

 

Global Trade Model

Dr. Baoping Guo, University of Northern Virginia, VA

 

ABSTRACT

This paper presents a global trade model that incorporates capital flows, consumer goods flows, and intermediate goods flows into the conventional multiregional input-output model in order to simulate the interconnected world economy as a whole through its various trade flows. The paper presents a global trade equation, an export equation, and an import equation by introducing domestic trade matrices, import matrices, and export matrices, which show the structural connections of the global economy from different analytical angles. The model suggests that there is import-export structural interdependence among countries in the international trade system. In addition, the paper provides a solution procedure for an exogenous, consumer-consumption-oriented dynamic multinational equilibrium. With continuing declines in transportation and communication costs and the reduction of man-made barriers to the flows of goods, services, and capital, markets are going global. This means that it is increasingly important to understand the implications of international trade and the interdependence of countries in the world economy.  This paper presents a global trade model to explore and simulate the structural connections of the multinational economy and international trade in the real world. It provides a comprehensive understanding of international trade flows by showing a method for systematically quantifying the mutual interrelationships of imports and exports among the various sectors of a complex economic system in which multiple countries participate. The paper presents a general trade equation, an export equation, an import equation, and a dynamic trade model to explore the structural connections of the multinational economy. The model integrates the multinational economy through a domestic trade flow matrix and an international trade matrix, and furthermore through three categories: an intermediate goods matrix, a capital goods matrix, and a consumer goods matrix.  Traditional theories of international trade have explained the existence and composition of trade between countries in terms of international differences in production functions, absolute and comparative advantage, and factor endowments. Recently, increased attention has been paid to other influences such as natural resources, tariffs, country size, scale of production, and other restrictive or favorable factors on trade. The model of this paper suggests that the structural interdependence of international trade is another basic reason for, and reality of, existing imports and exports and of further trends in trade. The model also shows that a country’s exports depend fully on other countries’ structural economic requirements and that a country’s imports are structurally determined by the home country’s demands for production, investment, and consumer consumption. As a computational approach to describing the reality of international trade, the model may also open an empirical explanation of trade in the multinational economy as structural interdependence of imports and exports. The existing international trade structure does change through competition, innovation, transfer of technology, and especially capital investment. The paper also explores the dynamic effect of capital flows on multinational equilibrium and provides a solution procedure for an exogenous, final-consumer-consumption-oriented dynamic multinational equilibrium.  The international economy is generally understood and discussed in terms of trade in goods or services.  
Existing studies in regional science and multiregional analysis have addressed the concept of trade flows based on Leontief’s input-output analysis since the 1960s, dealing only with intermediate goods flows. Guo (2003) proposed a dynamic multiregional model, which involved both intermediate goods flows and capital flows. The multinational trade model presented in this paper is essentially a conventional multiregional input-output model modified to incorporate three trade flows together in a multinational economic framework: intermediate goods flows, capital flows, and consumer goods flows.  Consider an economic system subdivided spatially into two countries (country R and country S), each with n economic sectors (industry or service). The trade flows indicated by matrix Ars in equation (1) are the intermediate goods flows.  If equation (1) is to be used to model multinational trade, it must be further modified to account for the trade flows of capital goods and of consumer goods, because both play important roles in the international economy. In order to describe intermediate goods trade flows, capital flows, and consumer goods flows in one system, we develop the following interconnected bi-national input-output import-export transaction table (see Table 1) to show the structure of the three trade flows.  This table depicts the trade flows of materials or services in terms of dollars, both between various industries or sectors and between countries, depending on the level of aggregation in the bi-national framework. There are three data blocks in it: an intermediate goods block, a capital goods block, and a consumer goods (or final use, in input-output analysis) block.  The capital input variable ur in equation (2) is total investment demand in destination nation r, which is the sum of capital goods made in the home country and capital goods made in the foreign country.  In addition, the capital supply variable vr is the total capital goods provided by source country r, which is the sum of capital goods used in the home country and capital goods shipped to the foreign country.  The consumer-goods-demand variable hr in equation (3) is the total amount of consumer goods consumed in destination country r, purchased both from the home country and from the foreign country. The final consumer-goods-supply variable yr is the total consumer goods made in the home country, which are shipped both to the domestic market and to the international market.
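Read this way, the accounting framework above can be summarized as a partitioned Leontief system. The following is a hedged sketch in our own notation rather than the paper's numbered equations:

\begin{bmatrix} x^{r} \\ x^{s} \end{bmatrix}
=
\begin{bmatrix} A^{rr} & A^{rs} \\ A^{sr} & A^{ss} \end{bmatrix}
\begin{bmatrix} x^{r} \\ x^{s} \end{bmatrix}
+
\begin{bmatrix} v^{r} + y^{r} \\ v^{s} + y^{s} \end{bmatrix},
\qquad \text{so that} \qquad
x = (I - A)^{-1} f,

where x^r is the gross output vector of country r, the off-diagonal blocks A^{rs} carry the intermediate goods trade flows, v^r is the capital goods supplied by country r to both markets, y^r is the consumer goods produced in r for the domestic and international markets, and f stacks these final-use deliveries. Under this reading, total investment demand and total capital supply balance across the two countries (u^r + u^s = v^r + v^s), and likewise for consumer goods (h^r + h^s = y^r + y^s).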

 

Mobile Commerce: Customer Perception and Its Prospect on Business Operation in Malaysia

Dr. Ahasanul Haque, Multimedia University, Selangor, Malaysia

 

ABSTRACT

Use of mobile devices will open significant opportunities for e-commerce, payment services, information services and entertainment. However, adoption of mobile commerce in the business-to-business (B2B) and business-to-consumer (B2C) sectors has been relatively low in Malaysia. With the rapid growth in the number of mobile users, it is crucial for companies to fully understand what influences consumer satisfaction. As consumer perception of services influences the level of satisfaction, companies should pay attention to the attributes that consumers perceive as important when making choices. Hence, this paper is an early attempt to provide empirical data on mobile users’ preferences for services as well as the attributes that are perceived as important when they shop using mobile devices. This paper explains users’ perception of mobile commerce and its scope as a business marketing strategy in Malaysia. The results of the study indicated a significant difference between male and female perceptions of mobile commerce. In addition, the results also highlighted a negative influence of certain types of information on the use of mobile commerce among the various groups.  In a new global and knowledge economy, there is high competition among organizations to attract customers. The sales cycle and the time to make decisions are becoming more limited than ever. Moreover, the availability of large volumes of information and rapid technological advancement increase customers’ expectations at a faster rate. In order to attract customers, satisfy customer needs, and defend themselves against these challenges, business organizations must be responsive. Meanwhile, the growth and success of the Internet has created new interest as well as new horizons in information technology development and business strategy. Millions of Internet users are expected to generate a large volume of business. Electronic business is considered a means to an end in accomplishing these goals.  As a fixed-line technology, the Internet has proved to be highly successful in reaching millions of homes worldwide. The growth of mobile technology in the form of wireless communication has also added a new dimension to the growth of information technology. Mobile commerce provides easy access to information from anywhere at any time. For business, it saves time, increases productivity and improves business performance through continuous mobile access to corporate Intranets and Extranets. It can also help expand global markets for new products and services. Mobile communication provides unlimited possibilities for offering new products and services to customers. As Malaysia is leaping into the information era, mobile commerce technology enables citizens of Malaysia to access the Internet via mobile devices and obtain information instantly, from anywhere, at any time. This is a new direction, capturing online communication benefits through mobile communication, and it has also developed a new business paradigm in the field of e-business. The Internet is big; on the other hand, the mobile market is also big and still growing at a phenomenal rate. Thus, in view of the impressive growth of technological advancement, as e-commerce moves toward m-commerce, this study intends to discuss the scope of mobile commerce and evaluate customer perception of it as a business strategy in Malaysia.  Mobile and wireless technologies are becoming increasingly pervasive. 
Mobile phones were once considered a luxury, but are now taking the place of conventional telephones in residential use. Wireless networks free users from the tethers that have bound them to their desks, enabling them to live and work in more flexible and convenient ways. Mobile technology started more than fifteen years ago with cordless phones.  The convergence of telecommunications and information technologies has revolutionized the way that people use these technologies. GSM (Global System for Mobile Communications), I-Mode in Japan and PCN (Personal Communication Network) are the most common technologies used in mobile phones. Other newer technologies are Bluetooth and 802.11b WLAN (Wireless Local Area Network). 802.11b is also known as Wi-Fi (for wireless fidelity) and was developed by the IEEE. These newer technologies offer lightning-fast data transmission around homes or offices. Wi-Fi networks are built around a series of transmitters installed at strategic spots (hotspots) around office or home buildings. Users with laptops, handhelds and other wireless devices can be linked and transmit data within a 100-meter radius of the network hub. Meanwhile, Bluetooth offers more “local-based” connections up to a 10-meter range.   With Bluetooth, users can create their own PAN (Personal Area Network) and communicate between laptops, printers, mobile phones and other peripherals. This increases personal productivity; that is, a user can transfer files or synchronize a PDA without cumbersome cables. At present, a variety of technological advancements and innovations have developed rather rapidly, particularly in the telecommunication industry. Simultaneously, new developments in consumer behavior and attitudes, competitors’ strategies and other changes in the marketplace emerge. 

 

Is Family Ownership a Pain or Gain to Firm Performance?

Jira Yammeesri, University of Wollongong, Northfields Ave., Australia

Dr. Sudhir C. Lodh, University of Western Sydney, Australia

 

ABSTRACT

This study examines the relationship between family ownership and firm performance in Thailand between 1998 and 2000.  This study focuses on family-controlling ownership, managerial-family ownership and managerial-non-family ownership.  The results show that family-controlling ownership is significantly positively related to profitability, but less significantly related to market returns.  The results also show that managerial-family ownership has a strong positive relationship with firm performance.  Interestingly, this study finds that managerial-family ownership does not always encourage firm performance: based on a non-linear analysis, the results show that at a certain level of managerial-family shareholding the relationship with firm performance becomes negative.  Ownership structure is clearly important in determining a firm’s objectives and shareholders’ wealth, as well as how the managers of a firm are disciplined (Porter, 1990; and Jensen, 2000).  In particular, the structure of ownership has been extensively discussed since Berle and Means (1932) introduced the separation of ownership and management.  The basic assumption is that the overwhelming interest of principals/shareholders is to maximize firm performance, whereas managers have other interests that may conflict with those of shareholders.  Such conflict between the interests of owners and managers leads to agency problems, which are cited as one of the causes of declining firm performance.  Berle and Means (1932), Fama (1980), Fama and Jensen (1983), Blair (1995) and Shleifer and Vishny (1997) suggest that these agency problems can be mitigated through the process of effective monitoring by concentrated shareholders (or controlling shareholders).  Meanwhile, Jensen and Meckling (1976) argue that the holding of shares by managers (so-called managerial ownership) in a firm can induce managers to maximize firm performance and shareholders’ benefit.  The important implication of ownership structure for firm performance is not limited to concentrated (or controlling) ownership, but also extends to the identity of ownership.  Thomsen and Pedersen (2000, p. 689) suggest “whereas ownership concentration measures the power of shareholders to influence managers the identity of the owners has implications for their objectives and the way they exercise their power, and this reflected in company strategy with regard to profit goals, dividends, capital structure, and growth rates”.  The identity of owners in each country, however, is different. Blair (1995), and Shleifer and Vishny (1997) suggest that concentrated shareholders with different identities have different monitoring skills and incentives in monitoring a firm’s management, or even different objectives, which may influence firm performance and shareholders’ wealth.  Alternatively, Jensen and Meckling (1976), Kim et al. (1988), and Oswald and Jahera (1991) suggest that when management personnel hold a proportion of shares in the firm (managerial ownership), the interests of shareholders and managers are aligned.  This results in a decrease in agency problems and hence an increase in firm performance.  Accordingly, Morck et al. (1988), McConnell (1990), Wong and Yek (1991), and Short and Keasy (1999) argue that there is an opposing effect behind the assumed linear relationship between managerial ownership and firm performance.  That is, managerial shareholders do not always encourage positive firm performance.  
They suggest that at a certain level of shareholding, managerial shareholders entrench their power and derive private benefits from their control of the firm, rather than the benefits associated with maximizing the firm’s performance.  Recently, several studies have examined the impact of ownership structure on firm performance, but the results are not clear-cut.  Moreover, the majority of previous studies that examine this relationship have been conducted in developed countries, such as the UK and the US, where ownership structure is typically different from that in developing countries.  In the developed countries, ownership concentration is very low and the legal protection of minority shareholders is relatively strong.  This is unlike the situation in developing countries, such as Thailand, where ownership is highly concentrated with less protection of minority shareholders, and where the identity of shareholders, particularly family shareholders, is especially important.  In Thailand, the majority of shareholders are family shareholders, who commonly control most Thai firms.  Family shareholders have an important role in these firms and often play a double role as managerial owners (Wiwattanakantung, 2001).  Recently, some studies have argued that the ownership structure of Thai firms, particularly family ownership, is one of the causes of declining firm performance (for example, profitability), which has discouraged the confidence of both domestic and foreign investors in the Thai stock market.  Such problems have sparked an intensive debate that attempts to account for the effect of Thai ownership structure, especially family ownership, on firm performance in Thailand.
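The incentive-alignment-then-entrenchment pattern described above can be captured with a specification along the following lines; this is a sketch in our own notation, and the paper's exact variables and controls may differ:

PERF_i = \alpha + \beta_1\, MFO_i + \beta_2\, MFO_i^{2} + \gamma' X_i + \epsilon_i

Here PERF_i is a profitability or market-return measure for firm i, MFO_i is the fraction of shares held by managerial-family owners, and X_i is a vector of controls (for example, size and leverage). Alignment followed by entrenchment corresponds to \beta_1 > 0 together with \beta_2 < 0, implying that performance peaks at MFO^{*} = -\beta_1 / (2\beta_2) and declines for shareholdings beyond that level.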

 

Comparison of Knowledge Management and CMM/CMMI Implementation

Dr. Sam Ramanujan, Central Missouri State University, Warrensburg, MO

Dr. Someswar Kesh, Central Missouri State University, Warrensburg, MO

 

ABSTRACT

As software projects’ deadlines have been missed, budgets grossly overspent, and resources not used to their full potential, the need for a structure or model to assist in implementation has become very apparent. CMM and CMMI were developed to address these situations. In addition to structuring tasks, organizations face the daunting task of organizing and maintaining the knowledge that exists within them. Such an effort is essential for organizations to gain leverage from their knowledge base. The process used to organize and maintain the knowledge base is aptly called Knowledge Management. In this study we highlight the symbiotic relationship between Knowledge Management and CMM/CMMI implementation in organizations. In a hypercompetitive environment like the software development industry, knowledge-based theory holds that possessing knowledge and using it efficiently provides a sustainable competitive advantage. Innovation, the source of sustained advantage for most companies, depends upon the individual and collective expertise of employees. Some of this expertise is captured and codified in software, hardware, and processes. Yet tacit knowledge also underlies many capabilities – a fact driven home to some companies in the wake of aggressive downsizing, when undervalued knowledge walked out the door! [1]. Knowledge management is an emerging discipline that promises to capitalize on organizations’ intellectual capital. The concept of taming knowledge and putting it to work is not new; phrases containing the word knowledge, such as knowledge bases and knowledge engineering, existed before KM became popularized. Software engineers have engaged in KM-related activities aimed at learning, capturing, and reusing experience, even though they were not using the phrase “knowledge management.” KM is unique because it focuses on the individual as an expert and as the bearer of important knowledge that he or she can systematically share with an organization. KM supports not only the know-how of a company, but also the know-where, know-who, know-what, know-when, and know-why.  The Capability Maturity Model (CMM) for Software and Capability Maturity Model Integration (CMMI) describe the principles and practices underlying software process maturity and help organizations establish visible, ongoing processes with well-defined steps. In mature organizations it is possible to measure process and product quality [1][2].  We believe that even though knowledge management and capability maturity models are different approaches to attaining sustained competitive advantage, there is a symbiotic relationship between the two implementations. Understanding the relationship between the two processes will help us implement both of them more efficiently.  In this paper, we first introduce CMM and CMMI concepts. This is followed by a discussion of Knowledge Management fundamentals. We then provide a comparison of CMM/CMMI and KM implementation. Finally, the conclusions and business implications of this study are presented.  The Software Engineering Institute (SEI) was established by the government in 1984 to address the Department of Defense’s (DoD) need for improved software and to define standards for software development. The government has always been a major purchaser of software and has had to deal with poor software, missed schedules, and high costs. 
SEI developed the Capability Maturity Model (CMM) in an effort to provide the government with a tool for gauging how well a contractor’s processes are defined. The SEI CMM is a five-level model that attempts to quantify a software organization’s capability to consistently and predictably produce high-quality software products. "The model is designed so that capabilities at lower stages provide progressively stronger foundations for higher stages. Each development stage or ‘maturity level’ distinguishes an organization’s software process capability."[3] Key process areas (KPAs) are identified for each maturity level. "When an organization collectively performs the activities defined by the KPAs, it can achieve goals considered important for enhancing process capability."[7]  In order to improve its software process, an organization can initiate a software process assessment (SPA). This process involves 6 to 8 senior managers of the organization and one or two coaches from the SEI or an SEI-licensed assessment vendor. 
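As a compact illustration of the five-level ladder and the idea that lower levels provide the foundation for higher ones, the structure can be sketched as a simple mapping; the level names follow the published SW-CMM, but the KPA lists below are abbreviated examples only, not the complete set.

# Illustrative sketch of the SW-CMM maturity ladder (abbreviated KPA lists).
CMM_LEVELS = {
    1: ("Initial", []),  # ad hoc processes; no KPAs are defined at this level
    2: ("Repeatable", ["Requirements Management", "Software Project Planning",
                       "Software Quality Assurance", "Software Configuration Management"]),
    3: ("Defined", ["Organization Process Definition", "Training Program", "Peer Reviews"]),
    4: ("Managed", ["Quantitative Process Management", "Software Quality Management"]),
    5: ("Optimizing", ["Defect Prevention", "Technology Change Management",
                       "Process Change Management"]),
}

def kpas_through(level: int) -> list[str]:
    """Collect the KPAs an organization must satisfy cumulatively up to a maturity level."""
    return [kpa for lvl in range(2, level + 1) for kpa in CMM_LEVELS[lvl][1]]

print(kpas_through(3))  # everything required at levels 2 and 3

The cumulative collection mirrors the quoted design principle: an organization at level 3 is expected to keep performing the level-2 practices as well.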

 

Selecting Consumer Oriented Alliance Partner to Assure Customer Satisfaction in International Markets

Feng-Chuan Pan, Tajen Institute of Technology and I-Shou University, Taiwan

 

ABSTRACT

Many firms enter into international strategic alliances with a wish to strengthen their competitive advantages in international markets. However, efforts that lack customer-perceived value or deviate from customer expectations have driven many firms to failure. Selecting the proper international strategic alliance partner has been viewed as a critical factor in the success of cross-border cooperation. While the consumer is the center of marketing and of many functional operations, consumers are ignored in most international strategic alliance research. This paper argues that, distinct from partner-related factors and task-related factors, consumer-related factors should be the most important criteria in selecting a partner, before other factors are taken into consideration. This paper presents the importance of the role of the consumer in business activities, and the necessity of involving consumer-related factors in the decisions of an international strategic alliance, particularly partner selection. Based on several international strategic alliance theories, this paper develops several propositions for further empirical study. To obtain competitive advantages, an increasing number of firms are now inevitably forced to compete in multiple markets (Jayachandran and Varadarajan 1999), where they share the same competitive space with competitors of all sizes (Etemad et al. 2001). In this highly competitive environment, it is unlikely that individual companies working alone, without using external resources owned by other organizations (Oliver 1990), can sustain sufficient resources for continuous growth and survival. It is important to effectively integrate an appropriate alliance partner’s resources and capabilities. While consumers are viewed as the center of marketing activities and as the key factor in market-based assets, and several scholars advocate conducting management studies with a ‘consumer orientation’ (for example, Brief 2000, Brief and Bazerman 2003), few if any studies specifically focus on the consumer as the critical criterion for selecting alliance partners. Consequently, many firms are now engaged in ‘management myopia’ (Brief and Bazerman 2003). This paper reveals the importance of consumers in the partner selection process of international strategic alliance. Since consumer behaviors are greatly shaped by the culture in which consumers dwell, selecting partners who have sufficient and relevant customer knowledge and are capable of satisfying customers contributes significantly to the success of international cooperation. The concept is shown in Figure 1. An inter-organizational relationship is a valuable asset to the organization (Madhok and Tallman 1998), and continuously acts as an important strategic tool for firms to strengthen their competences, leading to global competitive advantages in either the domestic or the international market through global presence (Gupta and Govindarajan 2001; Govindarajan and Gupta 2001).  Although there are several reports suggesting a high level of failure of strategic alliances (e.g. Parkhe 1991, Dodgson 1993, Hennart et al. 1998, Inkpen and Ross 2001, Pearce 1997), interorganizational collaboration remains an important strategic tool for firms, since an ISA can help to broaden firms’ knowledge base and decrease inertia (Vermeulen and Barkema, 2001), and can combine resources and capabilities to acquire mutual and sustainable competitive advantage in both domestic and international markets (Harbison and Pekar 1998, Parkhe 1993). 
On the other hand, ISAs are often formed between countries with cultural distance, which may provide firms a great opportunity to access the target’s diverse set of concepts (Morosini, Shane, and Singh, 1998). Along with the increasing cost of R&D, it is not uncommon to find cooperation among competitors in several industries for the purposes of sharing risks and costs. Cross-border alliances effectively extend and broaden the business and resource scope into the global market, offering firms a richer resource bank, particularly for those small and medium enterprises that generally lack sufficient resources and economies of scale.

 

The Relationship between Self-Directed Learning Readiness and Organizational Effectiveness

Dr. Min-Huei Chien, The Overseas Chinese Institute of Technology, Taiwan

 

ABSTRACT

The purpose of this study was to investigate the relationship between readiness for self-directed learning and organizational effectiveness.  The hypotheses that guided this investigation related to the relationship between readiness for self-directed learning and organizational effectiveness in several companies in Taiwan.  The results of the study showed a significant relationship between SDLRS and organizational effectiveness. The recommendations suggest that managers should help employees become ready for self-directed learning in order to improve organizational effectiveness.  In the 21st century--the Knowledge Age--corporations will see workers as intellectual capital. Workers themselves, rather than just information, will become the resources that allow organizations to respond quickly and effectively to rapid change. Learning is at the core of these demands--whether it's learning a new skill, knowing how to manage existing and new knowledge, or creating organizational structures that support continuous learning. This study introduces learners to a new focus on performance improvement based on knowledge as the competitive advantage. Self-directed learning is the foundation for the Knowledge Age. Well-conceived implementation of self-directed learning is crucial for the success of learning organizations in the 21st century.  Self-directed learning is meaningful to every organization, especially in the knowledge-worker age of the 21st century.  Understanding how to manage an organization's support for self-directed learning becomes a key factor in effectiveness. Thus, this study tried to address the questions below: (1) Recognize the importance of self-directed learning.  (2) Identify the most important aspect of most definitions of self-directed learning. (3) Identify the advantages of self-directed learning for the 21st century organization. (4) Identify roles trainers can play in self-directed learning.  The implementation of successful programs for learners has been shown to have a positive economic impact on businesses. The four hypotheses tested are: H1: Environment setting is positively associated with readiness for self-directed learning. H2: Managers’ attitudes are positively associated with learning satisfaction. H3: Organizational culture is positively associated with readiness for self-directed learning. H4: Readiness for self-directed learning is positively associated with organizational effectiveness. In the new Knowledge Age, the only successful organizations will be those that know how to gather, support, and manage knowledge. Managers or trainers who want to improve performance need support from the corporate culture and environment setting. This study set out to discover what factors make up a learning organization, how to assess whether an organization has them, how to train leaders to support them, and how to create them if they are missing. Self-Directed Learning: Self-directed training includes the learner initiating the learning and making the decisions about what training and development experiences will occur, and how. The learner selects and carries out his or her own learning goals, objectives, methods and means to verify that the goals were met. The most commonly accepted definitions of self-teaching (Tough, 1976) and of self-directed learning (Knowles, 1975) emphasize the fact of the learner’s control over the planning and execution of learning.  John Dewey said, “Education and learning is the matter of lifelong process” (Dewey, 1938). 
The works of Brookfield (1980) and Thiel (1984) form another point of view, related to the field-independence and field-dependence constructs.  This approach added attitude to our understanding of self-directed education.  Even (1982) went on to point out that adult education philosophy appears to favor the self-directedness implicit in field-independent learning styles.  In our daily life, most learning is informal and self-directed in nature. We buy a book and think about the writer's viewpoint. We attend a presentation given at a local school. We take some time at the end of the day to think about our day and what we learned from it. These are all informal forms of self-directed learning. Self-directed learning is one of the best and most efficacious ways of learning in the lifelong learning process.  Sound and valid organizational support can make self-directed learning go more smoothly and successfully (Kidd, 1973). We all know that creating “self-directed learners” will improve the quality of democratic participation, and ultimately the quality of life, because self-directed learning must inevitably produce more self-determining citizens. According to Candy (1991), self-directed learning is viewed as one of the most common ways in which adults pursue learning throughout their life span.  People supplement, and at times substitute, self-directed learning for learning received in formal settings.  On the other hand, one of lifelong learning’s principles is to equip people with the skills and competencies necessary to continue their own “self-education” beyond the completion of formal schooling.  Self-directed learning, then, is seen as simultaneously a means and an end of lifelong education (Candy, 1991).  In 1971 Allen Tough was among the first to research the areas of self-directed learning and the learner.  Tough examined the idea of what he named Adult Learning Projects.  His findings suggested that approximately 90 percent of all adults conduct at least one major learning project each year.  His findings also suggested that the average person conducts five to seven separate learning projects in one year, in five distinct areas of knowledge, skill, or personal change. He also found that a person spends an average of one hundred hours per learning effort, which adds up to a total of about five hundred hours across all of his or her efforts in a year.  This represents an average of almost ten hours a week (Tough, 1971). 

 

A Study to Improve Organizational Performance: A View from SHRM

Dr. Min-Huei Chien, The Overseas Chinese Institute of Technology, Taiwan

 

ABSTRACT

This research studies how to improve organizational performance from the perspective of strategic human resource management. The research method adopted was a qualitative case study, and the data were collected by in-depth interviews.  In the process of the research, the author interviewed fifty employees, including twenty managers and thirty workers.  According to the analysis of the research data, there are five factors affecting organizational performance: (1) motivation model, (2) leadership styles, (3) organizational culture and environment, (4) job design, and (5) human resource policies.  Based on the results, the study points out some feasible suggestions on administrative policy and management. The results of this study could be helpful to management effectiveness practices and to the construction of a quality management model in disciplinary study. Strategic human resource management is concerned with creating a competitive advantage for organizations by closely aligning human resource processes, such as recruitment, selection, training, appraisal, and reward systems (Fombrun, 1984). Research has also indicated that top performance increasingly demands excellence in all areas, including leadership, productivity, adaptation to change, process improvement, and capability enhancement (knowledge, skills, abilities, and competencies). Wright and McMahan (1992) agree and suggest that HRM can become a competitive advantage for organizations in terms of improving organizational performance if it more closely aligns its practices with strategic management efforts (Porter, 1985).  There is no doubt that improved organizational performance is the only way to lead to a successful business, but there are many different ways to improve organizational performance. According to research, there are several directions for improving organizational performance. Typical organizational performance projects include: process mapping and measurement; process improvement; expert facilitation of internal interventions; productivity improvement; monitoring and evaluation; measuring and assessing climate and culture; improving communication processes; team building and team effectiveness improvement; cohering management teams; and rationalizing the complexities of organizational structure (Chien 2003). This study focuses only on the view of SHRM.  Human capital has the highest potential value for the organization. Most businesses find themselves struggling to understand how to build a better HRM model for improving organizational performance, especially in the 21st century. The primary research question driving this study is: How does an organization develop an organizational capabilities strategy from a human resource capital perspective?  Strategic human resource management: Strategic human resource management (SHRM) is defined as "the pattern of planned human resource deployments and activities intended to enable an organization to achieve its goals" (Wright and McMahan, 1992, p. 298).  Performance is one of the key terms of the modern organization. “Performance” has, however, quite different meanings: From a process view, performance means the transformation of inputs into outputs for achieving certain outcomes. With regard to its content, performance informs about the relation between minimal and effective cost (economy), between effective cost and realized output (efficiency), and between output and achieved outcome (effectiveness). 
Thus, performance is equivalent to the famous three Es (economy, efficiency and effectiveness) of a certain activity or program (Javier Font 2002). Some research points out that successful organizations should have something outstanding. Morley (1990) developed some principles of successful organizations doing total development work. Those elements are listed below: 1. Select cohesive teams based on sentiments of mutual liking and respect for each other’s expertise. 2. Organize controlled convergence to solutions that everyone understands and everyone accepts. 3. Organize vigilant information processing and encourage actively open-minded thinking. 4. Avoid facile, premature consensus.  5. Maintain the best balance between individual and group work. 6. Initial generation of new concepts. Leadership means that a person can influence others to act in a certain way. The employee may need at times to influence his work group and to provide a vision of what the organization as a whole or the specific task at hand requires. Leadership skills are necessary at every level of the enterprise, from chief executive to line worker. Organizational effectiveness skills are the building blocks for leadership. Without them, leadership can be misplaced or even counterproductive.  Education and training are other important factors for organizational performance. In fact, training must be tied to the enterprise's strategic business requirements and maintain the organization's core competencies in every field at every level. Opportunities for lifelong learning should be provided to employees at all levels, which will promote organizational performance directly (Chien, 2003). Learning becomes a habitual activity instead of an occasional event.

 

Project Performance: Implications of Personality Preferences and Double Loop Learning

Dr. Karla M. Back, University of Houston, Houston, TX

Dr. Robert Seaker, University of Houston, Houston, TX

 

ABSTRACT

The process of managing and implementing a project is typically more dynamic than what an initial project plan would indicate. Projects exist within organizational as well as economic contexts, and they are designed and executed by human beings.  When such influences are considered collectively, chances are that a successful project will have required a series of reassessments and adjustments throughout the project’s duration. Given the nature of most project environments, what allows a project to meet its business and performance objectives in the most cost-effective and timely manner?  A proposed theory contends that as organizational and external environments become more complex, projects must evolve to be more organic in nature.  This is accomplished by building a team that practices, incorporates, and nurtures double loop learning – a phenomenon that refers to an individual’s capability and propensity for challenging the accepted rules and parameters that decisions or actions face.  It is also postulated that the tendency toward thinking and behaving in this way is correlated with certain aspects of an individual’s personality.  Therefore, project success may actually be determined at the time individuals are selected for the project team – that is, prior to any formal planning and implementation of tasks.  It is common that decision-making environments allow for re-assessments and adjustments.  Executives such as CEOs and marketing managers change directions all the time.  If rational and not ad hoc in nature, it is reasonable that strategies, operating objectives, and resources continue to be made relevant for the sake of driving shareholder value.  Mintzberg, Raisinghani, and Theoret’s (1976) strategy process model perhaps most accurately captures the decision dynamics within the organization, where on-going strategizing is influenced by continuous changes in the environment and in decision makers.  Project management, however, is typically not allowed such leniency.  The goals of a project are presented in concrete terms.  They are discrete, “contracted” deliverables to be attained through the most effective means possible given defined constraints.  This involves clearly defined plans that include budgets, resources, and behavioral parameters.  Tasks and roles are identified and the project is implemented through continuous monitoring and adherence to the plan.  The process of managing and implementing projects is typically more dynamic than what an initial project plan would indicate. Projects exist within organizational as well as economic contexts.  In addition, projects are designed and executed by human beings.  When such influences are collectively considered, chances are that a project will eventually end up morphing into something quite different from what was originally planned.  A project may be long and complex.  To illustrate, a new sales order management system that spans multiple business and geographic units is initially expected to take twelve months to implement.  Success depends on the timely design, integration, implementation, and testing of basic systems and processes related to customer interfaces, data capturing and warehousing, visibility of product availability and delivery lead times, accounts receivable and credits, financial and sales reporting, and so on.  In addition, the company resides in the highly competitive and dynamic industry of, say, apparel manufacturing, which makes this project even more critical and high profile.  
A project team is assembled and, logically, it is composed of a project manager, subject matter experts, and process reengineers. Preliminary planning commences so that goals and timelines are aligned with expectations.  This is then followed by detailed implementation plans that include the appropriate Gantt charts, Work Breakdown Structures (WBS), and other project management tools.  Upper management is briefed on the tasks, milestones, and timelines.  At this point everyone is confident and focused, particularly since a structured plan has been developed that further clarifies expectations.  To effectively and efficiently achieve objectives, however, a variety of visible and latent influences must be confronted throughout the life of the project.  If left alone, they can adversely alter a project.  It can be that somewhere along a planned course of action a project’s scope is formally altered because initial expectations were seen as unrealistic, or pressures for additional functionality result in “scope creep.”  It may also be that strong commitment from upper management has waned due to newly emerging strategic priorities.  Resource and capability issues may take the form of new technologies that change many of the requirements.  Technical roadblocks, such as missing data or inadequate software, can also make it difficult to deliver projects as planned.  Also, job turnover, reorganizations, downsizing, and, in general, the evolution of people’s roles and commitments over time all contribute to project instability.  Within the team itself, there are added dynamics.  Roadblocks may be traced to a lack of a particular type of expertise or to an insufficient level of formal or referent power needed to drive cooperation.

 

The Dollar Value of Improved Customer-Oriented Retail Sales Personnel

Dr. Edward Kemery, University of Baltimore, MD

Dr. Gene Milbourn, University of Baltimore, MD

 

ABSTRACT

It is argued that by failing to focus on the customer orientation of its sales clerks, an organization could place itself at a competitive disadvantage.  A study was conducted which found that a self-report measure of trait hostility correlated with supervisory ratings of the job performance of sales clerks.   Although the magnitude of the obtained correlation was low (r = .16), utility analysis demonstrated how even a selection instrument of such “modest” validity can produce a significant bottom-line payoff to an organization in some instances. The obtained findings also argue for continued investigation of theoretically meaningful personality-performance hypotheses. There are several options available for increasing the level of employees’ customer orientation.  One strategy is to hire job candidates who are likely to interact with customers in positive ways.  This strategy involves incorporating predictors of customer-oriented selling (COS) into the sales personnel selection system.  A second strategy is to provide training to teach job-specific customer-oriented behavior.  A sales clerk position involves more than customer contact.  Along with sales duties, an employee is expected to run a register, keep track of inventory, and maintain displays.  With these varied responsibilities, an employee’s immediate workload may make it difficult to respond to customers’ needs in a timely or tactful manner.  However, for effective customer relations, a high level of COS should be maintained despite current workload demands.  It follows that training programs for new hires, as well as periodic refresher courses for incumbents, should emphasize COS, particularly in “difficult” situations.  A third strategy is making COS an explicit component of employee evaluation.  That is, periodic supervisory evaluations targeting COS would provide feedback to employees and could serve as input into salary or other employment decisions. Each of these strategies assumes, however, that COS behaviors have been identified.  The personnel selection strategy assumes that pre-employment assessments (tests, interviews) measure factors that are related to the likelihood of an employee exhibiting positive COS; training involves the identification and practice of COS behaviors; and behavior-based performance evaluations are predicated on understanding the behavioral components of COS.  The utility of a human resource management (HRM) decision tool (e.g., a personnel selection system) is defined as the gain an organization accrues from implementing it.  Early utility analysis (Taylor and Russell, 1939) focused on the increase in decision accuracy (defined as the proportion of hires who are successful) attributed to a personnel selection system.  The utility of the selection system was gauged as the improvement in decision accuracy over the base rate (the proportion of individuals who are successful without the selection system).  For example, if the base rate is .40 and the selection system produces a decision accuracy of .45 (45 percent of hires are successful), then the utility of the selection system is .05.  Two additional factors, the validity coefficient (the correlation between a selection system and job performance) and the selection ratio (the proportion of the applicant pool hired), must also be considered.  For a particular validity coefficient, the utility of a selection system is greater when the selection ratio is low.  
Taylor and Russell (1939) provided a series of tables with which the utility of a selection system may be estimated.  An example of this type of analysis is found in the first three columns of Table 1.  The analysis assumes that the base rate is .30 and the selection ratio is .25.  Column one contains a range of validity coefficients from .05 to .50; column two contains the proportion of successful hires; and column three lists the proportional increase in successful hires (utility) at each level of validity.  Two points from Table 1 are worth mentioning.  As expected, both decision accuracy and utility are an increasing function of validity.  Thus, decision accuracy and utility can be enhanced through efforts to increase the relationship between selection instruments and job performance.  The second point, and one that is extremely important, is that even a selection system with extremely modest validity may have some utility.  As indicated in Table 1, when validity is as small as .05 (and the selection ratio is .25 and the base rate is .30), decision accuracy will still be enhanced.  This is somewhat counterintuitive, because it is tempting to dismiss a meager validity coefficient (e.g., .05) as being of little value because it accounts for such a small proportion of criterion variance (e.g., .0025).  However, as shown in Table 1, this is inaccurate (cf. O'Grady, 1982).  In situations where many persons are hired, such as a large department store chain, even a modest gain in decision accuracy will be beneficial.  The utility analysis developed by Taylor and Russell (1939) assumes that employees are classified as either successful or not, thereby ignoring differences in the job performance of successful employees.  Cronbach and Gleser (1965) provided a way of estimating the utility (in dollar terms) of an HRM program such as a new personnel selection system, training, or job redesign (Schmidt, Hunter, and Pearlman, 1982).  The utility U, defined as the payoff resulting from a selection procedure, is given by Equation 1, where N is the number of employees selected, T is the expected tenure of the selected group, dt is the difference in job performance between the pre-selection group and the selected group, SDy is the standard deviation of performance (in dollars) of the selected group, rxy is the validity of the selection system, and C is the cost of the selection system.  Suppose that the validity of a selection system is .20, an organization hires 300 sales clerks, the selection ratio is .25, the base rate is .30, the expected tenure of sales clerks is two years, and it costs the organization $50,000 to develop and implement a personnel selection program.  From Table 1 it can be seen that the expected average increase in sales is nearly one-fourth of a standard deviation (a 25% increase in performance).  Thus, Equation 1 becomes:
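A minimal numerical sketch of this utility arithmetic is given below in Python. It assumes one common form of the Brogden-Cronbach-Gleser equation, in which dt already reflects the validity of the predictor; because the excerpt does not report SDy, the dollar standard deviation of performance used here ($10,000) is purely hypothetical.

# Selection-utility sketch (Brogden-Cronbach-Gleser form); all figures illustrative.
def selection_utility(n_hired, tenure_years, d_t, sd_y_dollars, cost):
    """Dollar payoff of a selection program.
    n_hired       -- number of employees selected (N)
    tenure_years  -- expected tenure of the selected group (T)
    d_t           -- standardized performance gain of selectees (from Table 1)
    sd_y_dollars  -- standard deviation of performance in dollars (SDy, assumed)
    cost          -- total cost of the selection program (C)
    """
    return n_hired * tenure_years * d_t * sd_y_dollars - cost

# Worked example from the text: 300 clerks, two-year tenure, d_t = 0.25 SD, cost $50,000.
payoff = selection_utility(n_hired=300, tenure_years=2, d_t=0.25,
                           sd_y_dollars=10_000, cost=50_000)
print(f"Estimated utility: ${payoff:,.0f}")   # $1,450,000 under these assumptions

Under these assumptions the program more than pays for itself, and the payoff scales linearly with the number of hires, which is why even modest validities matter when hiring volumes are large.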

 

Gaining a Competitive Advantage from Advertising: Study on Children's Understanding of TV Advertising

Dr. Ali Khatibi, Multimedia University, Cyberjaya- Malaysia

Dr. Ahasanul Haque, Multimedia University, Cyberjaya- Malaysia

Dr. Hishamudin Ismail, Multimedia University, Cyberjaya- Malaysia

 

ABSTRACT

For many years, TV advertisers have produced commercials that are designed to attract and hold the attention of children of all ages. As a result, there has been increasing controversy regarding whether these commercials are fair, since they are intended to persuade children who are not mature enough to critically evaluate the messages presented. In this study, verbal and non-verbal measurements were used to investigate whether age, gender and parental influence have an effect on the understanding of TV advertising. The study measured two components of understanding TV advertising: the recognition of the difference between programs and commercials, and the comprehension of advertising intent. ANOVA analyses were performed, one for each measure of understanding of TV advertising, to assess the effects of age, gender, parent-child interaction and parental control of TV viewing. In addition, to determine among which groups the true differences lie, a follow-up test was conducted using the Least Significant Difference (LSD) method. The research found that the majority of children aged between five and eight have some understanding of TV advertising and are capable of differentiating programs from commercials, especially when this understanding is measured with a non-verbal rather than a verbal measure. However, the results based on verbal measures are not as conclusive. The findings also indicated that a child's age has a substantial positive effect on the child's understanding of TV advertising. This effect is most pronounced for the verbal measure of comprehension of advertising intent. Results also showed a small but significant negative effect of parental control of TV viewing, in which high control of TV viewing results in a relatively low understanding of TV advertising. For many years, TV advertisers have produced commercials designed to attract and hold the attention of children of all ages. As a result, there is increasing controversy over whether these commercials are fair, since they are intended to persuade children who are not mature enough to critically evaluate the messages presented. Many opponents of child-directed advertising believe that commercials aimed at young children can have a profound impact on their beliefs, values and norms (Moschis, 1987). Critics fear that children, more than adults, are susceptible to the seductive influence of commercials because they do not have the necessary cognitive skills to protect themselves against attractive and cleverly crafted advertising messages (Brucks et al., 1998). Moreover, many parents have expressed frustration over the persuasive power of TV commercials, and they find it difficult to deal with their children's repeated requests for food and toys they have seen advertised. In addition, it has been found that as children mature they tend to become increasingly skeptical about commercials. In general, however, children are less able than adults to fully understand the difference between programs and commercials and to comprehend the selling intent of advertising. Hence, TV advertising directed towards children commands special attention from parents and policy makers. The purpose of this study is to assess children's understanding of TV advertising, decomposed into: a) the effect of age, gender and parental influence on understanding of TV advertising; b) their recognition of the difference between programs and commercials; and c) their comprehension of the selling intent of commercials.
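The analysis strategy described above, an omnibus ANOVA followed by Fisher's Least Significant Difference comparisons, can be sketched in Python as below. The age groups and understanding scores are synthetic and purely illustrative; they are not the study's data.

# One-way ANOVA followed by Fisher's LSD pairwise comparisons on toy data.
from itertools import combinations
from math import sqrt
from scipy import stats

groups = {                                  # hypothetical understanding scores by age group
    "5-6 years": [2, 3, 3, 4, 2, 3],
    "7-8 years": [4, 5, 4, 6, 5, 4],
    "9-10 years": [6, 6, 7, 5, 7, 6],
}

f_stat, p_value = stats.f_oneway(*groups.values())      # omnibus test
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Fisher's LSD: unadjusted pairwise t-tests using the pooled within-group mean square
n_total = sum(len(v) for v in groups.values())
k = len(groups)
mse = sum(sum((x - sum(v) / len(v)) ** 2 for x in v) for v in groups.values()) / (n_total - k)
df_error = n_total - k

for (name_a, a), (name_b, b) in combinations(groups.items(), 2):
    diff = sum(a) / len(a) - sum(b) / len(b)
    se = sqrt(mse * (1 / len(a) + 1 / len(b)))
    t = diff / se
    p = 2 * stats.t.sf(abs(t), df_error)
    print(f"{name_a} vs {name_b}: diff = {diff:+.2f}, t = {t:.2f}, p = {p:.4f}")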
The children's advertising literature has clearly demonstrated over the years that children process and react to television advertising differently than adults. For example, children often have difficulty separating advertising from program content (Buijzen and Valkenburg, 2000). In addition, their limited vocabularies and language skills, as well as their underdeveloped cognitive abilities, hinder their understanding of messages designed for a more mature television audience (Brucks et al., 1998).  The truthfulness of the statement that children think differently than adults is fairly self-evident. Any adult who has spent even passing time with a preschooler or elementary school child recognizes that young children are not simply miniature adults. The young child (below middle childhood, or about age seven) in particular has a qualitatively limited manner of perceiving, thinking and interacting with other people, when measured by adult standards. Attempts to describe how children grow and develop in the way they think about and interact with their physical and social world have led to the emergence of cognitive development theories. The most famous cognitive development theory, and the one which has received the most research attention, is that of the Swiss psychologist Jean Piaget (John and Ratan, 1986).

 

Study of the Relationship between Perception of Value and Price and Customer Satisfaction: The Case of Malaysian Telecommunications Industry

Dr. Hishamudi Ismail, Multimedia University- Cyberjaya- Malaysia

Dr. Ali Khatibi, Multimedia University- Cyberjaya- Malaysia

 

ABSTRACT

The objective of this study is to examine the relationship between customer satisfaction, service quality and perceptions of value for leased line service in the Malaysian telecommunications industry.  In conducting the survey, the authors distributed the questionnaire to 245 respondents using three data collection techniques: personal interviews, telephone interviews and a mail survey.  Findings indicate that there is a relationship between customer satisfaction and service value. The empirical findings in this study also indicate a significant relationship between the overall customer satisfaction level, the overall quality of service and the tested variables, i.e. the perception of the current price and the perception of the current value.  In this study, the authors suggest that in order to increase the value of the service, enhancement efforts should concentrate more on the service quality aspect than on customer satisfaction. Malaysia welcomes the advent of the Information Age. In this era, information can flow easily and freely regardless of distance and territorial boundaries. This promises the world the most cost-effective and liberal way of sending information, ideas, people, goods and services across borders. In view of this, the government has embarked on a project called the Multimedia Super Corridor, or MSC. The prime objective is to help Malaysian companies test the limits of technology and prepare themselves for the future. The MSC will expedite Malaysia's entry into the Information Age and will also help to actualise Malaysia's Vision 2020. This corridor will bring together an integrated environment with all the unique elements and attributes necessary to create the perfect global multimedia climate.  The MSC does not stand on its own. It is supported by a high-capacity, digital telecommunications infrastructure designed to the highest international standards of capacity and reliability. To ensure success, the Malaysian government has set a very high standard for the MSC infrastructure, intended to make it a superior multimedia environment, the first of its kind in the world. On the customer side, digital leased lines will likely become an important service linking customers to the MSC environment. In view of this, it is certain that the telecommunications industry will play an important role in ensuring the success of the Multimedia Super Corridor. This will definitely create a big challenge for all telecommunications operators in Malaysia.  The perception of value plays a very significant role in determining customer satisfaction, especially in the marketing of a service. The value concept appears quite frequently, but a clear definition cannot be found until we turn to the literature on pricing. Monroe (1991) defines customer-perceived value as the ratio between perceived benefits and perceived sacrifice. Few studies have investigated the relationship that exists in the service industry between customer satisfaction, service quality and perceived customer value. Bolton and Drew (1991) found that service quality and satisfaction/dissatisfaction experiences were the most important determinants of value. They also noticed that value was positively related to customer loyalty. Hence, the aim of this study is to examine the relationship between customer satisfaction, service quality and perceptions of value for leased line service in the Malaysian telecommunications industry. Generally, it is agreed that the meaning of value is very difficult to define.
Value is an abstract concept whose meaning varies according to context. For example, economists equate value with utility or desirability.  Social scientists understand value in the context of human values, such as the instrumental and terminal values suggested by Rokeach (1973). Engineers, on the other hand, perceive value as any process designed to reduce cost while maintaining existing standards. In marketing, the meaning of value is quite similar to the notion of quality, which is typically defined from the customer's perspective. Sawyer and Dickson (1984) conceptualized the meaning of value as a comparison between weighted 'get' attributes and 'give' attributes.
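A minimal sketch of how the reported relationships could be examined with simple Pearson correlations is given below; the variable names and the toy 1-7 ratings are hypothetical and stand in for the survey data.

# Correlating overall satisfaction with quality, value and price perceptions (toy data).
from scipy import stats

satisfaction    = [5, 6, 4, 7, 5, 6, 3, 5]
service_quality = [5, 6, 5, 7, 4, 6, 3, 5]
perceived_value = [4, 6, 4, 6, 5, 5, 3, 4]
perceived_price = [4, 5, 3, 6, 4, 5, 3, 4]

for name, series in [("service quality", service_quality),
                     ("perceived value", perceived_value),
                     ("perceived price", perceived_price)]:
    r, p = stats.pearsonr(satisfaction, series)
    print(f"satisfaction vs {name}: r = {r:.2f}, p = {p:.3f}")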

 

How Does Market Efficiency Disappear? Comparing the Opening Position and Closing Position Simulation Results

Ching-Wen Lin, Takming Institute of Technology, Taipei, Taiwan

Dr. Kuang-Hsun Shih, Chinese Culture University, Taipei, Taiwan

Dr. Shaio Yan Huang, Providence University, Taichung Hsien, Taiwan

 

ABSTRACT

This study examines market efficiency for the Taiwan Electronic Index (TEI) in terms of the predictability of neural networks, and also compares the simulation results of using the opening position and the closing position. The neural trading system signals 14-15 transaction opportunities based on domestic and international information during the testing period. This investigation suggests that using the opening position may reflect superior trading information, and that this information is diluted with the passage of time. This study intends to examine market efficiency for the Taiwan Electronic Index (TEI) in terms of the predictability of neural networks, and also attempts to compare the differences between using the opening position and the closing position to simulate TEI trading.  Fama (1970) initially defined an efficient market as one in which prices always fully reflect the available information. Scholars such as Rubinstein (1975), Cornell and Roll (1981), and Brennan and Copeland (1988) extended the concept of market efficiency and tried to explain it from several different viewpoints, such as the costs of intermediaries and information costs. In such an efficient market, the only way to earn positive profits consistently is to develop competitive advantages, in which case profits may be viewed as the economic rents that accrue to this advantage.  A vast amount of literature has been devoted to the application of Artificial Neural Networks (ANN) in the finance and investment fields. For example, Dutta and Shekhar (1988) applied neural network technology to bond rating evaluation.  Malliaris and Salchenberger (1994) applied ANN to predict option volatility.  Chiang et al. (1996) applied a back-propagation algorithm to forecast the performance of U.S. mutual funds.  Darrat and Zhong (2000) applied a neural network model to investigate market efficiency using daily data from two Chinese stock exchanges. Phua et al. (2001) used genetic algorithms to predict the Straits Times Index of the Stock Exchange of Singapore. However, previous studies focus on comparing the predictability of ANN against other methodologies. The investment community desires more information on how training results can be used, and on how to use those results to construct a trading strategy.  The training sample of this study covers the period from January 1, 1995 through December 31, 2001, including approximately 2,000 daily observations per series. The training series include the trading shares and trading volume of the TAIEX and the past returns of the TAIEX, the TEI and four major U.S. indices. The neural trading system analyses trading information from both domestic and international markets to determine the future performance of the TEI. The best learning model was used to predict the daily rates of return from January 1, 2001 through December 31, 2001, with the daily return prediction calculated from the equation given in the paper. The structure of this paper is organized as follows. The second section introduces the neural algorithms and the neural trading system. Section three describes the database and research methodology. The following section presents the overall findings, and the final section offers conclusions and considers implications for the investment decision-making process.
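The kind of pipeline described above can be sketched in Python as below: compute daily returns from an index series, feed lagged returns to a small neural network, and translate the predicted next-day return into a trading signal. The synthetic prices, window length, network architecture and long/flat rule are illustrative assumptions, not the authors' actual setup.

# Neural trading sketch on synthetic data: lagged returns -> MLP -> long/flat signal.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
prices = 100 * np.cumprod(1 + rng.normal(0, 0.01, 600))     # synthetic index level

returns = np.diff(prices) / prices[:-1]                      # simple daily returns

lags = 5
X = np.column_stack([returns[i:len(returns) - lags + i] for i in range(lags)])
y = returns[lags:]                                           # next-day return to predict

split = 500                                                  # in-sample / out-of-sample split
model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
model.fit(X[:split], y[:split])

predicted = model.predict(X[split:])
signals = np.where(predicted > 0, "long", "flat")            # naive trading rule
print(f"Out-of-sample days: {len(signals)}, long signals: {(signals == 'long').sum()}")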
West (2000) compares five neural network models, including the multilayer perceptron, mixture-of-experts, radial basis function, learning vector quantization, and fuzzy adaptive resonance models, to investigate credit scoring accuracy. The results demonstrate that the multilayer perceptron may not be the most accurate neural network model. Although the universal approximation property of neural networks seems attractive at first sight, their intrinsically black-box nature has prevented them from being successfully applied in management science settings. The insights of neural algorithms need to be further explored and explained.  Baesens et al. (2003) report on the use of neural network rule extraction techniques to build intelligent and explanatory credit risk evaluation systems. The study reports on three popular neural network rule extraction techniques: first, Neurorule, suggested by Setiono and Liu (1996); second, Trepan, developed by Craven and Shavlik (1996); and finally, Nefclass, introduced by Nauck (2000).  It is concluded that Neurorule and Trepan are able to extract very compact rule sets and trees for all data sets. The propositional rules inferred by Neurorule are especially concise and very comprehensible.

 

Business School Curriculum: Can we learn from Quantum Physics?

Dr. Steven Tippins, ARM, Roosevelt University, Schaumburg, IL

 

ABSTRACT

This paper explores the development of the curriculum within business schools.  It posits that changes may be needed and suggests that work in areas such as quantum physics may be a place to start the reevaluation. If you peruse the undergraduate catalogue at many colleges and universities, striking similarities appear.  Beyond the nice pictures of students enjoying student life or having a meaningful conversation with a professor, the real similarities begin to reveal themselves when one looks at the curriculum.  Of the approximately 120 credits that a student must take to graduate, why do most schools require the same courses?  Is business such a science that we know exactly what must be taught, how it should be broken up, and in what order?  One of the duties within the realm of academia is service.  It is not uncommon for a curriculum committee (or any variation of that name) to periodically look at the curriculum to see if it is appropriate for the student body in question.  Whether this task is internally driven by a desire to provide a good product or externally driven by the need to comply with accreditation guidelines, analysis such as this is good.  To paraphrase, an unexamined curriculum is not worth offering.  Many times, one of the first questions that arises when curriculum review is broached is a form of "what is everyone else doing?"  Whether it is a formal survey of competitors or the copying of top programs, business school curriculum development can tend toward the incestuous.  The curriculum at many, if not most, schools may be exactly what is needed.  However, there may be ways of looking at what is taught and its overarching structure that may be helpful.  Some of the basic tenets that have been developed in the field of quantum physics may be helpful.  Before these concepts are presented, a brief history of business school curriculum will be given.  Much of the discipline that is known as business derived from the field of economics.  Business, to many, is the practical application of economic concepts.  This practical application is where business found its first strength and its first great criticism.  The criticism stemmed from the external perception that business was not a strong academic discipline.  The veracity of that perception is not at question here.  The perception and the responses came through in two major reports issued more than 40 years ago.  In 1959, both the Carnegie and Ford Foundations (Gordon and Howell, 1959; Pierson, 1959) issued reports analyzing and criticizing the state of collegiate business education.  Among the issues discussed were the lack of control or coordination over general studies courses and the overspecialization found within many undergraduate curricula.  The overspecialization brought charges of trade school versus collegiate level education.  In response to these two major studies, collegiate business education underwent a number of changes.  One of the largest changes was the emphasis placed on academic rigor.  Some of that was achieved through the elimination of majors that did not fit within the broad construct of business, while other gains were made through increased emphasis on publishing in rigorous journals.  Another criticism found in both reports, and then again 30 years later, was the lack of integration.  Out of the two foundation reports came the implementation of a capstone course at most schools.
This was echoed in a study by Porter and McKibbin (1988), who noted that the business world required a multidisciplinary approach to most problems, yet most business schools were addressing this requirement with a single capstone course.  The authors indicated that this was not sufficient, yet it is still the case at most business schools today.  The other major driver of curriculum is the accrediting bodies.  The major business accrediting organization is AACSB International (formerly the American Assembly of Collegiate Schools of Business).  AACSB International has changed within the last decade from a more prescriptive model to a mission-driven one.  While curriculum issues fall under the purview of a school's mission, few changes have been made to curricula at most schools at the aggregate level.  The current outlines for curriculum are given in Table One (Standards for Business Accreditation, AACSB International, pp. 16-19, revised February 14, 2001).  The standards highlighted in Table One allow all schools to design curricula according to their unique missions.  It is hard to believe that the curriculum model used by almost every school can meet the differing missions of each school.  Within the guidelines there are no standards pertaining to learning.  There are provisions calling for the monitoring of curriculum, but if only one model is presented there is nothing for those doing the monitoring to compare against.  The Association of Collegiate Business Schools and Programs (ACBSP) is the other business accrediting body.  Thought of as the teaching alternative, the ACBSP also has curriculum guidelines.  A major part of the curriculum under the ACBSP is the Common Professional Component (CPC).  Table Two presents the CPCs.  Within the guidelines it is stated that the CPCs must be covered in at least two thirds of a 3-semester-hour course or the 4-hour quarter equivalent.

 

Ally Strategic Alliance with Consumers? Who Cares?

Feng-Chuan Pan, Tajen Institute of Technology and I-Shou University, Taiwan

 

ABSTRACT

While strategic alliances are widely adopted by firms to compete in both domestic and cross-national markets for various advantages, and while customers and consumers are viewed as the center of any marketing activity, few if any studies place significant focus on the role of customers and consumers in strategic alliance decisions.  The author conducted a meta-analysis and citation analysis of the main academic literature and found that only a tiny fraction of studies involve customers or consumers in alliance-related research, whether in domestic or international contexts.  The author suggests the need to extensively and directly involve the customer or consumer as the center of research on strategic alliances, as they are in the real business world, particularly for international alliances that normally cross distinct cultures.  Long before Brief and Bazerman (2003) expressed, in their editors' comments in a highly prestigious journal, the Academy of Management Review, the expectation that consumers be involved in management studies, customers and consumers had been the center of marketing activities and the core of the missions of almost all profit and non-profit organizations.  Compared to other input factors, the consumer is the most valuable resource (Duncan and Moriarty, 1998) on which the essential foundation for competitive advantage is built.  In response to a fast-changing environment, many technical skills and managerial practices have been developed and adopted to reduce operating costs and enhance business efficiency, thereby assuring the firm's survival and growth.  Due to the increasing costs of R&D and the diversity of customers' requirements around the world, strategic alliances are widely adopted by firms operating in multiple markets.  Effective learning from partners in various functions, global presence (Gupta and Govindarajan, 2001; Govindarajan and Gupta, 2001), foreign market access (Gerlinger, 1991), and accordingly international market expansion can further be achieved by allying with appropriate cross-border partners, i.e. through international strategic alliances.  Relatively few studies focus on alliances compared to those that focus on, or at minimum concern, the 'consumer' and 'customer', as shown in Table 1.  Although strategic alliances are widely used in many industries, the sample in this research reveals that the consumer and customer remain important issues across many studies, at least more so than alliances.  Yet it is surprising that only a fraction of the alliance-related literature links alliance studies with the consumer or customer.  As indicated in Table 1, only 28 articles among these 9,597 consumer-, customer- and alliance-related articles partially link the consumer or customer with alliances.  Most of these 28 articles, or seven articles per year on average, did not directly focus on the relationship between strategic alliances and consumers or customers.  The customer is fundamental to valuable market-based assets (Srivastava, Shervani and Fahey, 1998; 1999).  Obtaining and retaining long-term profitable consumers and customers is vital to the success of any form of strategic alliance, no matter how or where the alliance is formed.  Customers and consumers are pivotal to any business decision.
Since the main concern of this paper is to explore the study of the consumer or customer in strategic alliances, several works that involve the customer or consumer and strategic alliances together are reviewed to explore the recent development of the subject; this is further illustrated in the next section.  A citation analysis is then applied to the articles related to the consumer and to strategic alliances, respectively, to reveal the most influential works, authors, and journals.  This paper presents the fact that consumers have long been ignored by strategic alliance research.  It is hoped that this paper may attract more academicians' and practitioners' efforts toward centering the consumer and customer in the study of strategic alliances.  Citation analysis has been used in several business and management disciplines (e.g. Johnson and Podsakoff, 1994; Li and Tsui, 2002; Peng, 2001; Phene and Guisinger, 1998; Tahai and Meyer, 1999), and it is a rather objective index of the influence of published papers (Li and Tsui, 2002).  With the world stepping into the 21st century at the beginning of 2001, this paper takes the literature published between January 1999 and January 2003 in journals listed in the SSCI (Social Science Citation Index) as sample data, representing the mainstream of theoretically and practically oriented academic research.

 

Knowledge Management Initiatives: Exploratory Study in Malaysia

Dr. Badruddin A. Rahman, Universiti Utara Malaysia, Malaysia

 

ABSTRACT

A study on Knowledge Management (KM) initiatives was conducted on a sample drawn from various categories of organizations in Malaysia. The categories were companies listed on the Kuala Lumpur Stock Exchange (300 of a total of 500 companies), government ministries and departments (30), educational institutions (80), small and medium size industries (100), the electronics industry (150) and government-owned agencies (10). A total of 303 questionnaires were returned, and the preliminary findings showed that nearly half of the respondents reported having already established formal knowledge management initiatives in their respective organizations. This was most evident among organizations in the education sector, government-owned organizations and government departments and agencies. Nonetheless, the findings also showed that the Malaysian private sector is slowly catching up to meet the challenges of the competitive business environment.  The key characteristics identified from leading companies that have successfully leveraged their assets provide a fertile ground for developing a knowledge management strategy. Companies that want to leverage this asset must approach knowledge management with a focus on their core competencies and tie those tightly to the business strategy and vision (Tiwana, 2000). The closing decades of the last century saw corporations locked in a struggle to out-do one another; the 21st century will see organizations in a struggle to out-know one another. More than half of the organizations listed in the Fortune 500 in 1993 are no longer on the list today. Even iconic names such as Sears and McDonald's find themselves in a slump. "What are we doing wrong?" ask some corporate leaders and shareholders. While they are comfortable discussing the management of people, products, financial resources and operations, they are not comfortable discussing the management of knowledge. Yet most now realize that knowledge management is a way of doing business that revolves around the following four processes: (1) Gathering: bringing information and data into the system; (2) Organizing: associating items with subjects, establishing context, and making them easier to find; (3) Refining: adding value by discovering relationships, abstracting, synthesizing, and sharing; and (4) Disseminating: getting knowledge to the people who can use it.  Knowledge management is crucial because it points the way to comprehensive and clearly understandable management initiatives and procedures. When companies fail to utilize tangible assets, they suffer the economic consequences, and this failure is clearly observable to markets and competitors alike. Although knowledge assets are harder to quantify, they are just as critical for the long-term survival and growth of the company. Success in today's competitive marketplace depends on the quality of knowledge and the knowledge processes that organizations apply to key business activities (Housel & Bell, 2001). For example, maximizing the efficiency of the supply chain depends on applying knowledge of diverse areas such as raw materials sources, planning, manufacturing and distribution. Likewise, product development requires knowledge of consumer requirements, recent scientific developments, new technologies and marketing.  Knowledge, as the insights, understanding and practical know-how that individuals possess, has two basic definitions of interest.
The first concerns a body of information, which might consist of facts, opinions, ideas, theories, principles and models.  This can also refer to a person's state of being with respect to some body of information.  Second, knowledge is the major factor that makes personal, organizational and societal intelligent behaviour possible.  In this regard, knowledge provides the ability to respond to new, unusual and interesting situations. In simple terms, knowledge concerns the full utilization of information and data, combined with people's potential skills, competencies, ideas, intuitions, commitments and motivation, and the ability and wisdom to use a pool of information in a way that achieves the objectives of the individual and the organization (Tan, 2000).  Knowledge is very complex and comes in many forms and types. The most common distinction is that between explicit and tacit knowledge (Nonaka, 1991). Explicit knowledge can be expressed in words and numbers and shared in the form of data, scientific formulas, product specifications, manuals, universal principles and so forth. Tacit knowledge is highly personal, hard to formalize, and difficult to communicate or share with others.  It is something that is not easily visible and expressible, and it is rooted in an individual's actions and experiences as well as the ideas, values or emotions the person embraces.  In fact, there are two dimensions to tacit knowledge: the technical dimension, which consists of informal and hard-to-articulate skills often captured in the term 'know-how', and the cognitive dimension, which consists of the beliefs, perceptions, ideas, values, emotions and mental models that workers hold.

 

Alternative Panel Estimates of Elasticities for Cigarette Demand in the U.S.

Su-Chen Yang, Chung-Hua University, Taiwan

Dr. Yao-Hsien Lee, Chung-Hua University, Taiwan

Dr. Jian-Fa Li, Chin Min College, Taiwan

 

Abstract

This paper presents a set of more refined estimates of the demand for cigarettes in the U.S.  It was found that the dynamic fixed effects estimator (DFE) outperforms the pooled mean group estimator (PMG) and the mean group estimator (MG) in the U.S. cigarette market from 1961 to 1997.  The estimated short-run price elasticity is -0.122 (from the DFE), and the long-run price elasticity is -0.716 (from the DFE) over the period from 1961 to 1997.  This provides a better understanding of whether future price increases for cigarettes will cover settlement costs.  The elasticity of demand for cigarettes plays a crucial role in both the pricing decisions of cigarette firms and government policy to mitigate the medical costs of cigarette smoking.  The literature on estimating the price elasticity is abundant (see Table 1).  The demand for cigarettes was in the past estimated either from time series data or from cross-section data (Lyon and Simon, 1968; Bishop and Yoo, 1985; Keeler, 1993; Coats, 1995).  Without considering specific state laws or changes in consumer tastes over time, biased estimates may result.  More recently, researchers have attempted to use the panel data approach to avoid the estimation problems facing either time series data or cross-section data (Baltagi and Levin, 1986 and 1992; Becker, et al., 1994; Keeler, et al., 1998; Baltagi, et al., 2000; Baltagi and Griffin, 2001).  Although a panel estimator might be a better alternative, not all researchers agree on the fundamental assumption of homogeneity of slope coefficients across regions.  Pesaran, et al. (1999) proposed a new approach, the Pooled Mean Group (PMG) estimator, which constrains long-run coefficients to be identical but allows the short-run slope coefficients and error variances to differ across groups.  The PMG estimator offers a middle ground between the Mean Group (MG) estimator and the Dynamic Fixed Effects (DFE) estimator.  The MG estimator estimates separate regressions for each unit and calculates the means of the coefficients, while the DFE estimator pools the data and assumes the equality of the slope coefficients but allows the intercepts to differ.  To find more reliable price elasticities of demand, in this paper we apply the three estimating methods discussed above to estimate the demand elasticity for cigarettes in the U.S. over the period from 1961 to 1997.  We also use refined import incentive index and export incentive index data sets to estimate the demand function for cigarettes in each state.  The organization of this paper is as follows.  The operational models are presented in section 2.  Section 3 presents conclusions.  Table 2 reports the descriptive statistics of the data set used in this study; more detailed data sources are described in the appendix.  Due to the availability of the data, this study includes the consumption of cigarettes, the retail price of cigarettes, the export and import incentive indices, and disposable income per capita for 42 states and the District of Columbia from 1961 to 1997.  Because of the low transportation cost of shipping cigarettes and the wide spread in state excise tax rates for cigarettes, the smuggling problem may be serious.  Therefore, sales within a state can differ from cigarette consumption in that state.
The export incentive index (and the import incentive index), a weighted average of the difference between the exporting (importing) state's tax and the neighboring states' taxes for each year, is introduced here to control for cross-state substitution.  In addition to the considerable variation in prices among states, the average retail price per pack has increased markedly over time since 1982 (see Figure 1).  Disposable income is measured in thousands of dollars per capita.  Following previous studies, some demographic variables are also included as regressors.  People over 65 years of age tend to smoke less.  African Americans also smoke less, as do people with a bachelor's degree or a more advanced degree.  Therefore, the percentages of people over 65 years of age, of African Americans, and of people with a bachelor's degree or an advanced degree are included in the demand function for cigarettes.  Unit root tests of the variables have become a standard estimation step, in order to avoid a spurious relationship between the dependent variable and the regressors.  To avoid the low power problem of the conventional Augmented Dickey-Fuller (ADF) unit root test, we adopt the panel unit root test procedure developed by Im, et al. (1997), who demonstrated that their t-bar test has superior power compared to competing tests, such as the Levin and Lin (1993) panel test, for testing the stationarity of the variables.  This unit root test procedure is based on the average of the individual unit root statistics for the panel and allows for heterogeneity of dynamics and error variances across states.  In this study, we consider a sample of N states over T periods.  To accommodate the possibility of different serial correlation patterns across groups, we represent their model by the following finite-order autoregressive process:
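The mechanics of the Im-Pesaran-Shin idea, estimating an ADF regression for each state separately and averaging the individual statistics into a t-bar, can be sketched in Python as below. The state series are synthetic, and the critical values needed to judge t-bar are not reproduced, so this only illustrates the procedure, not the paper's results.

# IPS-style panel unit root sketch: per-state ADF statistics averaged into t-bar.
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(42)
n_states, n_years = 43, 37                    # 42 states plus DC, 1961-1997

t_stats = []
for _ in range(n_states):
    # hypothetical stationary AR(1) series standing in for log per-capita sales
    series = np.empty(n_years)
    series[0] = rng.normal(5.0, 0.1)
    for t in range(1, n_years):
        series[t] = 2.5 + 0.5 * series[t - 1] + rng.normal(0, 0.05)
    adf_stat, *_ = adfuller(series, maxlag=2, autolag=None)
    t_stats.append(adf_stat)

t_bar = np.mean(t_stats)
print(f"Average ADF statistic (t-bar) across {n_states} states: {t_bar:.2f}")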

 

Purchasing Power Parity and the Base Currency Effect: A re-examination

Dr. Khalifa Hassanain, United Arab Emirates University, Alain, U.A.E.

 

ABSTRACT

This article reexamines the base currency effect in tests of purchasing power parity (PPP). The test is conducted using a newly developed nonlinear IV unit root test that accounts for cross correlation, applied to panels constructed with twenty-one industrial-country base currencies. We use annual data and allow for different dynamic structures over the free float era. While the choice of the base currency matters for the annual data, weak rejections occur mostly with European currencies. The rejection is not strong using the German mark, and the null is rejected even using the dollar and the yen as base currencies. Volatility appears to be the only significant explanation for the base currency effect. The IV test results show that cross correlation does matter for base currency invariance.  The purchasing power parity (PPP) theory states that the nominal bilateral exchange rate et, which is the relative price of two currencies, should adjust in equilibrium to reflect their purchasing powers. The theory assumes that all goods are identical in both countries and that transportation costs and trade barriers are very low. Recently there has been increasing evidence to suggest that purchasing power parity does in fact hold as a long-run phenomenon. These studies mostly used panel testing procedures, and a number of them found stronger rejections when the German mark rather than the US dollar is used as the base currency, e.g. Jorion and Sweeney (1996), Papell (1997) and Papell and Theodoridis (1998), to mention some. Engel et al. (1997) argued that the sets of real exchange rates generated by different choices of base currency are linear combinations of one another; thus, changing the base currency does not change the information that is used in the estimator, only its configuration, i.e. its interdependence. If all elements of one set are stationary, then the elements of the other set must also be stationary, so panel tests of PPP should be constructed to be invariant to the base currency. O'Connell (1998) showed that, under certain conditions, controlling for cross-sectional dependence in panel tests of PPP makes the results invariant to the choice of the base currency. Papell and Theodoridis (2001) argued that O'Connell's results are valid only if there is no serial correlation or if the serial correlation properties of each real exchange rate are assumed to be the same; violation of these restrictions will result in a base currency effect, which makes the question an empirical one. Papell and Theodoridis (2001) used quarterly data for 21 industrial countries and feasible GLS (SUR). They first fitted univariate ADF regressions for each currency, treated the optimal AR models as the true data generating processes for the errors in each of the series, and constructed real exchange rate innovations from the residuals. They calculated the covariance matrix of the innovations, which is not diagonal, preserving the cross-sectional dependence found in the data, and used it to generate real exchange rates that have a unit root by construction. Their study concluded that numeraire invariance could not be supported empirically. The study also concluded that the evidence for PPP tends to be stronger for European than for non-European base currencies, and that distance between countries and volatility of exchange rates are the most important determinants of the results.
Although cross-sectional dependence is accounted for under SUR in their study, modeling cross-sectional dependence is more complicated because, unlike in pure time series models, individual observations in cross-sections display no natural ordering. The IV panel unit root test used in this study allows for a general dependency structure among the innovations that generate the data for each of the cross-sectional units, and the individual IV t-ratio statistics are asymptotically independent even across dependent cross-sectional units. This study reexamines the base currency effect using annual data for the same set of base currencies. To control for cross correlation, the study uses a newly developed nonlinear IV panel unit root testing procedure due to Chang (2002). The rest of this paper is organized as follows: Section 2 describes the data used in the study. Section 3 examines the data using the ADF statistic. Sections 4 and 5 present the methodology. Sections 6 and 7 examine the data using the IV panel unit root test and explain the results. Section 8 concludes.  The data set is taken from the International Financial Statistics (IFS) CD-ROM of November 2000. The price series used to construct the real exchange rates is the consumer price index (CPI), as it is the only price series available for most countries in the panel. We also use end-of-period exchange rates. The test is implemented with data sampled at annual frequencies extending from 1974 to 1998, with 500 observations. The use of annual data is justified since we are looking for long-term behavior rather than short-term movements. Twenty-one industrial countries are included in each panel, namely: Australia, Belgium, Canada, Denmark, Finland, France, Germany, Greece, Ireland, Italy, Japan, the Netherlands, New Zealand, Norway, Portugal, Spain, Sweden, Switzerland, the UK, the USA and Austria.
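As a minimal sketch of the data construction step behind such tests, the Python fragment below builds a log real exchange rate from made-up nominal rates and CPIs and applies a plain ADF test; the Chang (2002) nonlinear IV procedure itself is not reproduced here, and all of the numbers are hypothetical.

# Construct a log real exchange rate q_t = ln(e_t) + ln(P*_t) - ln(P_t) and test it.
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(1)
n_years = 25                                   # 1974-1998, annual observations

nominal  = 1.5 * np.cumprod(1 + rng.normal(0, 0.05, n_years))    # home currency per base unit
cpi_home = 100 * np.cumprod(1 + rng.normal(0.03, 0.01, n_years))
cpi_base = 100 * np.cumprod(1 + rng.normal(0.02, 0.01, n_years))

q = np.log(nominal) + np.log(cpi_base) - np.log(cpi_home)        # log real exchange rate

adf_stat, p_value, *_ = adfuller(q, maxlag=1, autolag=None)
print(f"ADF statistic for the real rate: {adf_stat:.2f} (p = {p_value:.3f})")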

 

Perspectives in Consumer Behavior:  Paradigm Shifts in Prospect

Dr. Z. S. Demirdjian, California State University, Long Beach, CA

Dr. Turan Senguder, The Journal of American Academy of Business, Cambridge, Hollywood, FL

 

ABSTRACT

Despite its adolescence, consumer behavior as a discipline has attained a crowning position in marketing.  Many professionals and academics characterize consumer behavior as the key to contemporary marketing success.  Over the years, various approaches based on the social sciences have been proposed and applied to teaching and researching the consumer.  Prompted by their ever-increasing complexity, interest in the social sciences has recently seemed to wane.  Although there have not been seismic changes in the field, there have been some shifts in paradigms. As the discipline develops, one important question is what approach to adopt for teaching and researching consumer behavior.  To broaden the underpinning theories of consumer behavior, paradigms outside the social sciences could very well be tapped for additional understanding of the complex nature of the consumer.  Several frontiers of other sciences seem promising for understanding the consumer.  As this paper explains, the prospects for an interdisciplinary approach outside the family of social sciences appear brighter than ever for thinking outside the "black box" (i.e., the mind) and for contributing to the field's dynamism.  That human behavior is complex, replete with controversies and contradictions, comes as no surprise to marketing academicians or practitioners.  Consumer behavior is no exception.  Against the backdrop of widespread recognition of consumer behavior as being the key to contemporary marketing success (Hawkins et al. 2003), the fundamental question has been what approach to use in the study and teaching of this fascinating academic field.  As the articles presented in the readings books of Spiggle and Goodwin (1988), Tan and Sheth (1985), and van Raaij and Bamossy (1993) show, consumer behavior has over the years been the subject of many models and intellectual arguments.  There have been a number of debates between positivistic and interpretive consumer researchers (Hudson and Ozanne 1988). In a dynamic field, such a condition is normal.  As Kernan (1995) indicates, compared to most academic fields, consumer behavior is relatively very young.  Therefore, the field is still going through growing pains and development. All but a few of the pioneers are still living.  Many imponderables enter into the discussion of the methods applied to teaching consumer behavior.  Different assumptions lead to different approaches. Early in the history of consumer behavior, Berber (1977) edited a book devoted to various aspects of consumer behavior from the perspective of different disciplines. In the same vein, but from European perspectives, Kassarjian (1994) has shown us the rich and varied scholarly European roots of American consumer behavior. For instance, if behavior is propelled by psychological variables, then its study relies heavily on human motivation, perception, learning, and so forth. The result would be a psychological model like the one proposed by Howard and Sheth in 1969. The approach to teaching consumer behavior would then depend heavily on concepts drawn from research undertaken in marketing and psychology. At first glance this paper may look overdrawn, but considering the rich heritage of the consumer behavior literature, as Kassarjian (1995) reported in his commemorative article "Some Recollections from a Quarter Century Ago," we would hardly be scratching the surface.
With that disclaimer in mind, we first plan to touch upon how this exciting area of scientific inquiry, as an academic discipline and as a field of research, has made use of a blend of economics, psychology, social psychology, sociology, anthropology, and other related social science disciplines.  Secondly, an attempt will be made to answer the question of whether the use of the social sciences has run its course in building a viable framework of essential principles, concepts, and variables.  Finally, we plan to present some frontiers in other sciences as new paradigms, which seem promising for providing additional knowledge for thinking outside the "black box" when teaching and researching consumer behavior.  The marketing concept, which enthrones the consumer at the center of marketing strategy, has served as the gravitational force for entrenching the field of consumer behavior in marketing.

 

A Model for Web Server Security

Dr. Someswar Kesh, Central Missouri State University, Warrensburg, MO

Dr. Sam Ramanujan, Central Missouri State University, Warrensburg, MO

 

ABSTRACT

Organizations are now increasingly dependent on their web servers for business as well as for disseminating both mission-critical and non-mission-critical information.  The core and peripheral business of many organizations, as well as their image, depend heavily on the web sites that reside on their web servers. At the same time, incidents of attacks on web sites by hackers with a multitude of motives have increased significantly in recent years. It is therefore essential to secure web servers to the maximum possible extent.  This paper discusses various facets of web server security and presents a model for web server security based on an analysis of the threats and of the tools and technologies available to protect these servers.  With organizations increasingly conducting business over the web and using the Internet to disseminate information, web servers have become a key component of an organization's survival. The cost of downtime due to hackers runs into billions of dollars. It is therefore imperative that organizations use the best possible means of protecting their web server(s).  This paper provides a model for web server security and makes recommendations on how the model can be used for developing or improving web server security.  To develop the model, the relationships between the components of web server security are analyzed in pairs.  First, the relationship between the security needs of web servers and the threats to web server security is analyzed. Then the relationship between threats and the technologies that counter those threats is analyzed, and finally the relationship between technologies and the tools that implement those technologies.  The model can be useful both for analyzing the current weaknesses of web server security and for designing a new web server security infrastructure.  To assess the current security infrastructure, the systems administrator can review the existing security mechanisms and tools and determine whether they satisfy current needs. To design a security infrastructure, an administrator can examine the organization's needs and threats and select tools that will support the organization's security needs. In developing the model, we first explored the needs of web server security and the relationship of those needs with the threats that disturb, or attempt to disturb, them.  Specifically, based on a variety of security literature, the needs of web server security can be classified into the following: the need for access control and information authentication; the need for integrity; and the need for availability. Access control ensures that only those with valid access can reach the resources, and that those without valid access cannot.  Moreover, access should be limited to only those aspects of the web server that are needed. For example, systems administrators will typically need access to all aspects of the web servers, whereas customers placing an order should have access only to the order forms.  The threats to access control can be either physical or logical. Examples of physical threats include the ability to enter a building or room housing the web servers without legitimate access rights.  Other threats include adding new nodes to hubs or concentrators with either hardware- or software-based sniffing capabilities. Logical unauthorized access can be accomplished by impersonating someone with valid rights.
This includes stealing passwords or using an authorized password to gain access beyond the rights provided to that password. It should be noted that physical access can lead to logical access; a sniffer can assist in stealing passwords.  Integrity ensures that only authorized parties can make changes to documents. A change in the orders placed in an e-commerce environment can be devastating for an e-commerce business.  Moreover, altering the information content of a web server can provide misleading information. The threats to integrity are similar to the threats to access; however, threats to integrity arise only when someone gains access at a level consistent with the rights to alter a document.  Availability ensures that the web servers will be available when needed. In almost all cases, this means 24x7 availability. Viruses and denial of service are two major threats to web servers. The variety of viruses makes the job even more difficult. These include parasitic viruses that attach to executable files; memory-resident viruses that reside in main memory and spread from there; and polymorphic viruses that mutate, making their detection extremely difficult (Stallings, 1999). Boot sector viruses infect the boot record, and macro viruses execute automatically and can copy themselves to other documents, make changes, delete files, and so on.  Denial of service corrupts the hard disk of the web server or uses up the entire memory of the system by bombarding a site with massive amounts of traffic. This can also be achieved by e-mail or by a fraudulent source host that generates packets with random source addresses.  We now explore the interface between the threats discussed in the previous section and the technologies available to counter those threats.  Some of these technologies can counter multiple threats.  For example, cryptography is extremely useful because it renders any information accessed meaningless, and it can counter the threat of packet sniffing as well as unauthorized access to information.
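The pairwise structure of the model (needs mapped to threats, threats to countering technologies, and technologies to implementing tools) can be sketched as simple lookup tables, as below. The specific entries are illustrative examples drawn loosely from the discussion, not the authors' complete model.

# Tracing a security need through threats and technologies down to concrete tools.
NEED_TO_THREATS = {
    "access control": ["stolen passwords", "packet sniffing", "physical intrusion"],
    "integrity": ["unauthorized document changes"],
    "availability": ["viruses", "denial of service"],
}
THREAT_TO_TECHNOLOGIES = {
    "stolen passwords": ["authentication", "cryptography"],
    "packet sniffing": ["cryptography"],
    "physical intrusion": ["physical security"],
    "unauthorized document changes": ["access rights", "cryptography"],
    "viruses": ["antivirus scanning"],
    "denial of service": ["traffic filtering"],
}
TECHNOLOGY_TO_TOOLS = {
    "authentication": ["password policy manager"],
    "cryptography": ["TLS/SSL"],
    "physical security": ["badge access system"],
    "access rights": ["file-system ACLs"],
    "antivirus scanning": ["virus scanner"],
    "traffic filtering": ["firewall"],
}

def tools_for_need(need):
    """Collect every tool reachable from a security need via its threats and technologies."""
    tools = set()
    for threat in NEED_TO_THREATS.get(need, []):
        for tech in THREAT_TO_TECHNOLOGIES.get(threat, []):
            tools.update(TECHNOLOGY_TO_TOOLS.get(tech, []))
    return tools

print(tools_for_need("availability"))          # {'virus scanner', 'firewall'}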

 

Methods for Maximizing Student Engagement in the Introductory Business Statistics Course: A Review

Dr.  Charles F. Harrington, University of Southern Indiana, Evansville, IN

Dr. Timothy J. Schibik, University of Southern Indiana, Evansville, IN

 

ABSTRACT

Suggestions are offered to create a collaborative teaching environment where active learning is the primary method used to teach business statistics.  Students often claim to find their initial experience with business statistical analysis uninteresting, inapplicable, and uninspiring. Faculty at large, whether from research universities, comprehensive colleges, or private institutions, report frustration in integrating activities designed to invigorate and energize student engagement in first-year business statistics courses.  Various alternatives to the lecture class format are suggested in an attempt to encourage instructors to try alternative pedagogies.  The intellectual and practical engagement of students in the undergraduate business statistics curriculum poses significant challenges to faculty regardless of institutional or student body characteristics.  Students often claim to find the initial experience with business statistical analysis uninteresting, inapplicable, and uninspiring. Faculty at large, whether from research universities, comprehensive colleges, or private institutions, report frustration in integrating activities designed to invigorate and energize student engagement in first-year statistics courses.  Historically, business statistics curricula have favored theory over application and cursory attention over practice and competency. However, student expectations, the demands of graduate education and workplace statistical competencies, and accreditation body criteria have shifted the curricular focus to the interpretation and meaning of statistics rather than the rote memorization of abstract mathematical concepts.  Providing students with opportunities to develop their skills and abilities as consumers as well as practitioners of statistics and statistical analysis is paramount if business students are to be sufficiently equipped for the world of work.  The development of these skills and abilities is even more important for those students taking advanced coursework in business statistics or research methods, or preparing for graduate-level study. In the early 1990s, many authors called for changes in statistics education (e.g., Hogg, 1991, 1992; Moore, 1992; Cobb, 1992, 1993; Snee, 1993; Snell and Finn, 1992), yet very little if any change has occurred in the decade since these calls.  The aim here is to provide suggestions for change from the literature that will create a collaborative teaching environment where active learning is the primary method used to teach business statistics.  Further understanding by statistics instructors of the intellectual and social contributions afforded through service learning, the integration of technology into the delivery of the curriculum, and the benefit of writing-intensive assignments can each contribute significantly to improving student engagement in business statistics.  This rather discouraging assertion from George Cobb of Mount Holyoke College follows from two paths of research results: the first illustrates what makes learning statistics hard and lecturing in statistics classrooms often ineffective, and the second shows what does seem to work when lecturing does not.  First, basic statistical concepts are hard and misconceptions persist.  Ideas often found in probability and statistics are difficult for students to learn because they conflict with many of their own beliefs and intuitions about chance and data.
Students correct erroneous beliefs reluctantly, and only when their old ideas do not work. In this situation learning is enhanced when students are forced to confront their misconceptions, a process for which lectures are not generally effective. Second, learning statistics is constructive (Higbee et al., 1991). To absorb the full impact of those words, one has to push their implied metaphor to its limits: concepts are constructions, learning is building. Common-sense principles of carpentry, applied to the process of teaching and learning, lead to the same conclusions as those derived from research on how students learn: to teach students to build, spend less time lecturing and more time on site, where comments can be focused on the work students are actually doing.  Taken together, the two sets of results lead to the main recommendation of this paper, namely that teachers of statistics must find ways to foster active learning in their statistics classrooms.  What does all this mean for the teaching of business statistics?  How can faculty enhance student learning and revitalize the undergraduate business statistics curriculum by facilitating student engagement in the learning process?  Several recommendations for business statistics instructors are provided herein.
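As a purely illustrative example of the kind of technology-supported, active-learning exercise such recommendations point toward (this sketch is ours, not the paper's; the skewed population, sample size, and number of replications are arbitrary choices), students can construct a sampling distribution themselves rather than hear it described in a lecture:

# Hypothetical in-class activity: students simulate sample means and see the
# sampling distribution emerge. All numerical settings below are assumptions.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(42)
population = rng.exponential(scale=500, size=100_000)   # skewed "order values"

# Each student (or group) draws repeated samples of size 30 and records the mean.
sample_means = [rng.choice(population, size=30).mean() for _ in range(2_000)]

plt.hist(sample_means, bins=40)
plt.title("Means of 2,000 samples (n = 30) from a skewed population")
plt.xlabel("Sample mean")
plt.show()

The point of such an exercise is that students confront their intuition about the shape of the distribution of means directly, in the constructive spirit the paper advocates.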

 

From Industrial Revolution to Managerial Evolution: The Case of IBM Credit Corporation

Dr. Hui-Kuan Tseng, University of North Carolina at Charlotte, Charlotte, NC

 

INTRODUCTION

We ought to admit that we are living in a time of constant change. It is the first time in human history that humans are capable of producing information far more rapidly than they can absorb it.  Humans have sped up the process of adaptation far beyond what they could have imagined. Rapid technological advancement has made knowledge and information the key competitive advantages. The Quality Movement of the 1980s brought consumers better quality products at lower prices, and industry power has shifted toward consumers ever since. The internationalization of markets and the overall rise in living standards brought opportunities on the one hand, and competition on the other. With the emergence of the economic powers of the Asia-Pacific region, competition is increasing globally. Global competition, the spread of the Internet, and rapidly advancing technology have had a great impact on conventional ways of organizing and operating. On a global scale, managers with vision strive to discover innovative and competitive ways of managing in order to adjust to the situation. New industrial management models have been developed rapidly. Nowadays, participatory management, flattened organizations, and empowerment have become the jargon of industrial managers. A great number of companies strive to increase their competitiveness and efficiency.  The Industrial Revolution, which first got its start in Great Britain, changed the way the world produced its goods.(1)  The effects of the Industrial Revolution were far-reaching. It brought new doctrines, such as laissez-faire, capitalism, and democracy, into the political arena.  Sabel [1985] and Matthews [1986], among others, studied the effects of the Industrial Revolution on politics. The Industrial Revolution also led to the expansion of trade, commerce and banking. Cameron [1982] and Jones [1984], among others, examined the effects of the Industrial Revolution on banking and trade. New technology resulted in increased production, but it also created social ills. Workers, especially women and children, were paid low wages and forced to work long hours. Smelser [1959] and Thompson [1967], among others, studied the effects of the Industrial Revolution in the social sphere.  This paper examines the effect of the Industrial Revolution on business management, using IBM Credit Corporation as a case study. In this paper, we trace IBM Credit Corporation's business managerial model back to the history of the Industrial Revolution. Why does IBM Credit do what it does?  Why is Adam Smith's principle of the division of labor not desirable in IBM's business managerial model?(2)  Section II reviews the history of the Industrial Revolution.  Section III examines IBM Credit Corporation's business managerial model.  In Section IV, we conclude our study.  IBM Credit Corporation was not a high-tech company.  Its business was to provide financing for IBM customers who leased or purchased IBM computers.  It was a wholly owned subsidiary of IBM.  Hammer & Champy (1994) illustrated IBM Credit's work process. When a field salesperson found a customer who wanted to buy IBM computers, the salesperson called the sales department at headquarters.  One of the 14 clerks in the sales department received the phone call and took down the name and other information of the potential customer.  In about a week, the salesperson would receive a Federal Express package.  
In the package there would be a copy of the contract, with financial terms, for the customer to sign.  The salesperson would go to the potential customer immediately.  Often, however, the potential customer would no longer be interested in buying IBM products: the customer had changed his or her mind and found another computer system, or perhaps a loan from another financial institution had been approved.  During that week, the salesperson had called repeatedly to expedite the order, but it seemed that no one in the whole company had a clue where the order was.  The salesperson felt so helpless that it seemed the order had been lost in a bureaucratic black hole. Why did IBM have such a rigid organizational structure, so inefficient and unresponsive to customers' needs? After all, IBM was just applying Adam Smith's principle of the division of labor.  The managers at IBM Credit Corporation tried to fix the problem. They established a customer service representative position and issued an executive directive that all customer orders had to go through this position and be recorded when transferring from one department to another.  When salespersons called to inquire about an order, they then knew exactly where it was located.  Yet although a salesperson now knew where his or her customer's order was and which department was working on it, the overall lag between the time the order was called in and the time the contract arrived grew even longer.  So the managers devised another solution.  They issued a further executive order to expedite the work process, imposing a 4-hour limit on each department's handling of a customer order: when a department received an order, it had four hours to work on it before passing it to the next department. To their great disappointment, although the rules were followed strictly, the overall cycle time did not improve.  To avoid being penalized for failing to finish within the 4-hour limit, workers in a department who could not finish their work on time would find a minor error in the document and send it back to the previous department. This, of course, prolonged the overall cycle time.
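The dynamic described above, in which a per-department deadline encourages error-driven kick-backs that lengthen total throughput time, can be illustrated with a small simulation. This is our own sketch, not part of the case: the number of departments, the work-time distribution, and the rework times are all assumed values.

# Hedged sketch: how a per-department time limit combined with
# "find a minor error and send it back" behavior can lengthen cycle time.
# Department count, work times, and rework times below are assumptions.
import random

DEPARTMENTS = 5
WORK_HOURS = (1.0, 6.0)   # hours of real work a step needs (uniform, assumed)
LIMIT = 4.0               # the 4-hour rule described in the case

def cycle_time_without_limit(rng):
    # Each department simply works until its step is done.
    return sum(rng.uniform(*WORK_HOURS) for _ in range(DEPARTMENTS))

def cycle_time_with_limit(rng):
    # A step that cannot finish within the limit bounces the order back,
    # adding rework time before the same step is attempted again.
    total, dept = 0.0, 0
    while dept < DEPARTMENTS:
        work = rng.uniform(*WORK_HOURS)
        if work > LIMIT and dept > 0:
            total += LIMIT                   # time used before the "error" is found
            total += rng.uniform(0.5, 2.0)   # rework in the previous department
            continue                         # the same step is attempted again
        total += work
        dept += 1
    return total

rng = random.Random(1)
trials = 10_000
base = sum(cycle_time_without_limit(rng) for _ in range(trials)) / trials
ruled = sum(cycle_time_with_limit(rng) for _ in range(trials)) / trials
print(f"average cycle time without the limit: {base:.1f} h; with the limit: {ruled:.1f} h")

Under these assumed numbers the deadline rule raises, rather than lowers, the average cycle time, which is the paradox the managers at IBM Credit encountered.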

 

 Developing New Markets for Turfgrass-sod in the United States

Dr. John J. Haydu, University of Florida, Mid-Florida Research and Education Center, Apopka, FL

Dr. Alan W.  Hodges, University of Florida, Food & Resource Economics Department, Gainesville, FL

 

ABSTRACT

Three years of research examining market opportunities for turfgrass-sod were conducted in the eastern (1999), central (2000), and western (2001) regions of the United States.  A total of 1,248 firms, representing eight distinct Standard Industrial Classifications (SIC), were surveyed.  Data were analyzed by geographic region and type of business.  Results indicate that considerable differences exist across these categories with respect to market outlets, grass varieties used, and purchasing criteria of customers.  Market outlets have shifted dramatically in recent years from a direct selling approach (from the farm direct to the customer) to more indirect selling through large retail chains.  The major grass varieties used by consumers were largely a function of geographic location, where climate restricts optimal growth, rather than of problems associated with market outlets.  The primary purchasing criterion was product quality, followed by price, product availability, and delivery, although results varied somewhat by type of business.  Cultivated turfgrass is a pervasive feature of the urban landscape in the United States and many other regions of the developed world.  It is preferred as a vegetative groundcover to reduce soil erosion, absorb pollutants, dampen noise, and provide a comfortable, durable, and aesthetically pleasing surface for outdoor activities.  Turfgrass is a major characteristic of home lawns, commercial landscapes, golf courses, hotels and resorts, and public institutions, including schools, cemeteries and airports.  The turfgrass industry is a remarkably diverse and economically important component of the horticultural industry.  The USDA estimated that in 1997 there were over 300,000 acres of turfgrass-sod produced in the United States, representing a farm gate value of $800 million (USDA, FLO-2002).  While large, this value represents only a small portion of the total economic activity generated by turfgrass.  For instance, while Florida produced nearly 80,000 acres of sod in 2000, it was estimated that roughly 5 million acres of turfgrass were being maintained by families, commercial businesses and institutions (Haydu et al, 2002).  Considering that the average Florida homeowner spent over $1,300 on their lawn in 1992, the dollars generated statewide on turfgrass maintenance are potentially enormous (Hodges et al, 1994).  In spite of this large and robust industry, many sod producers have experienced weak demand in the form of declining prices and square feet of turfgrass sold.  As a consequence, the International Turfgrass Producers Foundation (ITPF) funded a three-year study to identify strategies to expand the market for sod.  Traditionally, once sod leaves the farm, it passes through one or more marketing channels: for new residential or commercial developments, for re-landscaping existing developments, for sports turf facilities such as athletic fields and golf courses, for commercial applications that include businesses and public and private schools, and for roadside uses (Haydu et al, 1998).  A conceptual illustration of product flows within the sod production-marketing system and its major players is shown in Figure 1.  For simplicity, the sod market is divided into two primary sectors: new developments, comprising roughly 75 percent, and existing homes and commercial businesses, covering the remaining 25 percent.  
For new developments, it is estimated that roughly a third of total volume is sold through landscape contractors and the other two-thirds by sod installers.  In essence, these segments represent the array of possibilities that producers must consider in their marketing strategy. The final customer can be a homeowner, a golf course facility, a cemetery, or a public institution.  Each of these potential customers has different needs and expectations.  Although the customer may decide the type of sod to purchase, in many cases this decision is left to the landscape contractor or installer.  Hence, both sets of customers should be considered carefully in any type of marketing program.  The purpose of this study is to address these marketing issues and to identify practical strategies for expanding sod markets.  The report consists of three parts.  Part one presents a brief overview of the survey methods employed.  Part two introduces important findings of the research in the western region and, where applicable, compares results with the other regions.  Part three offers specific marketing recommendations based on the conclusions of the study.  Given that the study area was to cover the entire United States, the country was subdivided into three regions (eastern, central, and western), with each region requiring one year to complete.  The first year began with the eastern region, which was further subdivided into three subregions:  1) Northeast (Connecticut, Maine, Massachusetts, New Hampshire, Vermont, Rhode Island, New York); 2) East Central (Delaware, Maryland, New Jersey, Pennsylvania, Virginia, Ohio, Michigan, Indiana, Kentucky, West Virginia); and 3) Southeast (Alabama, Mississippi, Georgia, Florida, South Carolina, Louisiana, Arkansas, Tennessee, North Carolina).  The central region was surveyed in year two, being further subdivided into:  1) North-Central (Illinois, Iowa, Minnesota, Nebraska, North Dakota, South Dakota, Wisconsin) and 2) South-Central (Arkansas, Kansas, Louisiana, Missouri, Oklahoma, Texas).  The western region was surveyed in year three, being further subdivided into:  1) West-interior (Arizona, Colorado, Idaho, Montana, Nevada, New Mexico, Utah, Wyoming) and 2) West-coastal (Alaska, California, Hawaii, Oregon, Washington).

 

Attrition of Agency in Real Estate Brokerage

Dr. Bruce Lindeman, University of Arkansas at Little Rock, Little Rock, AR

 

ABSTRACT

In the past two decades, the agency function in real estate brokerage has evolved from representation only of sellers to include agency representation of buyers. This evolution has led to a variety of problems, especially the likelihood of conflict of interest when agents attempt to represent both buyer and seller in a transaction.  Because the legal considerations are controlled largely by state law, attempts by the states to resolve these problems have provided an interesting variety of "laboratory experiments", none of which (so far) seems to have achieved a desirable solution. This paper includes a history of past developments, a summary of the current situation, and some reflections upon the implications of these problems.  Under agency law, agents are employed by principals to represent them in some way. An agent must (1) obey the principal's instructions (so long as they are within the law), (2) be loyal to the principal's interests, (3) act in good faith, (4) use his/her professional judgment, skill, and ability, (5) account for all money belonging to others that comes into his/her possession, (6) perform agency duties in person and (7) keep the principal fully informed as to developments affecting their relationship. A key element of agency is that agents have a duty to act in the principal's best interests, even if doing so requires the agent to act against his/her own personal best interests.  Since the inception of real estate license law a century ago, real estate licensees in all states have been defined as agents. Real estate licensees are limited agents: they solicit offers to buy, sell or lease, and assist and advise their principals during negotiation. Normally they do not, on their own, commit the principal to anything; it is the principal who decides how to respond to offers. Nonetheless, agency representation by real estate licensees can provide significant advantages to buyers and sellers, who can rely upon knowledgeable licensee-agents to assist and advise them in a complicated and daunting transaction. An experienced licensee's knowledge of the market, the complexities of real estate transactions and, sometimes, even the situation of the other party can be very valuable to a potential buyer or seller.  Until the 1980s, licensees everywhere represented only the seller in a real estate transaction. In this environment the primary agent is the listing broker, who has a listing contract with the seller (the principal).  All other licensees involved in a transaction (salespersons, licensees associated with other firms) are subagents of the listing broker and, therefore, agents of the seller as well. Although prospective buyers interact with licensees (who show them properties), buyers have no agency representation in the transaction when the market environment supports only seller agency.1  Real estate brokerage firms earn their income by collecting fees (usually a percentage commission based upon the sale price) from sellers when the seller's property is sold. It is important, also, to understand that real estate brokerage firms do not operate in a vacuum. They usually "share" their listings reciprocally with other brokerage firms in the same market. (Listings are contracts with sellers to solicit offers from prospective buyers; in effect, listings are the firm's "inventory".) They allow each other access to their listings, that is, to show each other's listed properties to their own buying prospects. 
In this way, each firm has access to a much larger inventory, which generally results in quicker sales at better prices. The cooperating firms share the listing broker's fee in some agreed-upon manner. In larger markets where there are several brokerage firms, a multiple listing system usually exists. This is a formal arrangement among participating brokerage firms in which they agree to make their listings available to the others to show, and agree on the manner in which the selling commission is to be split.  Until the 1960s the legal environment of the residential real estate brokerage business was one of exclusive seller agency representation and caveat emptor ("let the buyer beware"). In this environment buyers were legally on their own, and sellers and their agents had considerable leeway in their selling techniques. Courts typically allowed relief to aggrieved buyers only in cases of egregious misconduct (fraud, misrepresentation, failure to disclose a "dangerous condition", etc.). During the 1960s and 1970s, however, court decisions gradually swung more in favor of buyer equity, and state license laws were changed to require licensees to treat both parties "fairly and equitably".  Exclusive seller representation is straightforward and has no inherent agency conflicts, provided that the licensees involved understand and respect their agency duties to the seller.  However, real estate licensees have always interacted intimately with buying prospects.  In fact, they spend much more time in the company of prospective buyers than they do with sellers, analyzing their needs, showing them properties, and encouraging them to make offers. Also, licensees are, first and foremost, salespeople. A basic tenet of salesmanship is to earn the prospect's trust; a natural result is that licensees could take on the appearance, and often the actuality, of looking out for the buyer's interests. Therefore, buyers often assume that agents working with them are working for them as well. In the "old days", a typical promotional message to buying prospects was "Our service costs you nothing!", while neglecting to add that "this is because, legally, we aren't allowed to serve you."  Indeed, many licensees acted and performed as if they did represent the buyer's interests, offering advice and counsel that clearly violated their agency loyalty to the seller.2
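As a purely illustrative arithmetic example of the fee-sharing arrangement described above (the sale price, commission rate, and equal split are hypothetical figures, not taken from the paper):

\[
\text{total fee} = 6\% \times \$200{,}000 = \$12{,}000, \qquad
\text{listing firm's share} = \text{selling firm's share} = \tfrac{1}{2} \times \$12{,}000 = \$6{,}000 .
\]

In practice the split need not be equal; it is whatever the cooperating firms or the multiple listing system agree upon.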

 

A Dynamic Econometric Modeling of the U.S. Rice Market

Dr. Sung Chul No, Southern University and A&M, Baton Rouge, LA

Dr. Hector O. Zapata, Louisiana State University, Baton Rouge, LA

 

ABSTRACT

Over the past three decades, developments in time series analysis have brought new approaches for combining structural characteristics of market models with stochastic processes that better represent available data: structural econometric and time series analysis (SEMTSA) and the structural vector autoregressive (SVAR) model.  The paper provides an empirical evaluation of these two approaches for the U.S. rough rice market.  Transfer functions (TF), derived from the SEMTSA model, were estimated.  The turning point, RMSE, and MAPE evaluations revealed that the TF model provides accurate forecasts and greatly reduces forecasting errors relative to the existing structural and ARIMA models for the fundamental rice market variables in an out-of-sample period (1990-1999).  The research also addressed the empirical usefulness of combining structural and statistical properties of economic data in commodity modeling.  A comparative analysis of the impulse response functions revealed that the estimated effects of specific behavioral shocks in the VAR model often do not appear economically intuitive.  Having imposed structural relationships in a time series context, the study found that most response functions in the SVAR model conform to economic logic, with empirical results far superior to those generated from a VAR in levels.  These empirical findings in favor of the TF and SVAR models stem from a common methodological approach, which combines economic theory with the statistical properties of time series.  The research findings suggest that a significant contribution to commodity modeling can be derived from this type of approach.  This conclusion is supported by the empirical findings from the economic model of the U.S. rough rice market.  Over the past three decades, developments in time series analysis have brought new approaches for combining structural characteristics of market models with stochastic processes that better represent available data.  One line of research comprises the works of Zellner and Palm (1974), Plosser (1978), Wallis (1983), and Webb (1985), known as structural econometric and time series analysis (SEMTSA).  The other approach (Sims, 1986; Bernanke, 1986; Blanchard and Quah, 1989; Keating, 1990; Amisano and Giannini, 1997; Bernanke and Mihov, 1998) is the structural vector autoregressive (SVAR) model, which is an economic-theory enhancement to the standard VAR approach.  Considerable research has been published on large-scale structural econometric modeling of the U.S. rice market (O'Carroll et al., 1977; Grant et al., 1984; Watanabe et al., 1990; Adams, 1994).  The framework with which these econometric models for the U.S. rice market quantify economic behavior is based on theory and knowledge of economic and institutional characteristics. Economic relationships among rice market fundamental variables are intertwined and complex.  Policy shocks, for instance, may stimulate acreage responses which may take years to settle.  Producer responses to changes in domestic or world conditions may also be gradual.  Thus, the change is not simply a matter of instantaneous adjustment from one equilibrium to another, as classical static structural econometric models have assumed.  On the contrary, the change may be a matter of adjusting between equilibria over a period of time, with the pattern and speed of the adjustment depending on the nature and degree of disequilibrium in the system.  
Of course, the pattern and speed of the adjustment and the degree of disequilibrium are empirically testable.  To our knowledge, this line of dynamic analysis for the U.S. rice market, using impulse response functions (IRFs) in a structural time series framework, has not been published.  Furthermore, forecasting market fundamental variables for the U.S. rice market is an important component of the existing structural econometric models.  An evaluation of the forecasting performance of existing rice models revealed considerable discrepancies between actual outcomes and forecast values.  To address these two issues, the current paper adopted the SEMTSA and SVAR models: the former to generate forecasts for the U.S. rice market and the latter to describe the dynamics of the market via a comparative evaluation of traditional vector autoregressive (VAR) and SVAR models.  The plan of the paper is as follows.  The data are first described.  Secondly, the procedural aspects of the SEMTSA approach are depicted; this section shows how transfer functions, the primary forecasting model for this study, are derived from a structural econometric model.  The latter part of the section outlines the method for IRFs derived from the standard VAR in levels (VAR), the VAR in differences (DVAR), and the SVAR models.  Thirdly, the study provides the empirical results of the forecasting performance analysis and comparative evaluations of the VAR, DVAR, and SVAR models.  Lastly, the paper summarizes and concludes.  Data are annual from 1960 to 1999, consisting of aggregate acreage planted (APT), yield per harvested acre (YD), production (PD), domestic consumption (QHM), export demand (QEX), ending stocks (QES), farm price (FP), cost of production (CP), milled rice price (PUS), and disposable per capita income (PCI).  APT, YD, PD, QHM, QEX, QES, FP, PUS, and CP data for 1960 through 1992 were obtained from "The U.S. Rice Industry", while data from 1993 to 1999 were collected from the "Rice, Situation and Outlook Yearbook." QHM was obtained from the "Rice, Situation and Outlook Yearbook" (ERS/USDA, 2001).  The data for PCI were obtained from the "Economic Report of the President" (2001).
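As a minimal sketch of the kind of reduced-form benchmark and out-of-sample accuracy check discussed here (this is not the authors' SEMTSA transfer-function or SVAR specification; the file name, lag length, and column labels are assumptions), a standard VAR in levels could be fit on 1960-1989 and evaluated on the 1990-1999 hold-out with RMSE and MAPE:

# Hedged sketch: fit a VAR in levels and compute out-of-sample RMSE and MAPE.
# The data file, column names, and lag order below are assumptions.
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

def forecast_accuracy(actual, forecast):
    # Root mean squared error and mean absolute percentage error.
    err = np.asarray(actual) - np.asarray(forecast)
    rmse = float(np.sqrt(np.mean(err ** 2)))
    mape = float(np.mean(np.abs(err / np.asarray(actual))) * 100)
    return {"RMSE": rmse, "MAPE": mape}

def var_out_of_sample(rice: pd.DataFrame, split_year: int = 1989, lags: int = 2):
    # rice: annual series indexed by year, with columns such as APT, YD, PD, ...
    train = rice.loc[:split_year]
    test = rice.loc[split_year + 1:]
    fitted = VAR(train).fit(lags)
    forecasts = fitted.forecast(train.values[-lags:], steps=len(test))
    # fitted.irf(10) would give impulse response functions of the kind compared here.
    return {col: forecast_accuracy(test[col].values, forecasts[:, i])
            for i, col in enumerate(rice.columns)}

# Example usage with a hypothetical file:
# rice = pd.read_csv("us_rice_1960_1999.csv", index_col="year")
# print(var_out_of_sample(rice))

The paper's contribution lies in combining such statistical machinery with structural restrictions from economic theory; the sketch above only shows the unrestricted benchmark against which the TF and SVAR results are compared.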

 

From General System Theory to Total Quality Management

Dr. Te-Wei Wang, Florida International University, Miami, FL

 

ABSTRACT

This paper evaluates the theoretical grounding of total quality management (TQM). General systems theory (GST) is the reference theoretical framework. TQM and GST are contrasted side by side from two perspectives: theoretical assumptions and implementation methods. The comparison between system approaches and TQM practices shows that TQM is a true systems approach. TQM provides principles and techniques that implement GST. TQM utilizes cybernetic principles to deal with traditional efficiency and productivity problems. It also prescribes many practical methods and techniques for building a learning organization.  TQM has long been criticized for its lack of guiding theories (Sitkin et al, 1994). Starting in the 1990s, many researchers tried to put TQM under rigorous theoretical examination. For example, Grant, Shani, and Krishnan (1994) compared TQM with traditional management theories. Hackman and Wageman (1995) assessed the coherence, distinctiveness, and likely perseverance of the TQM philosophy. Dean and Bowen (1994) also found substantial overlap between TQM and management theory. Following their work, this paper describes and examines TQM from the perspective of general systems theory (GST).  Why study TQM through GST? First of all, TQM and GST have many similarities. Both cover a great deal of the same ground as management theories. In Dean and Bowen's (1994) terms, both TQM and GST are considered interdisciplinary studies, and they often transcend the boundaries of existing theories. Furthermore, GST is a meta-theory which can be used to bridge many simpler models with different assumptions. It is believed unavoidable for social scientists to study systems (Kast & Rosenzweig, 1972; Bailey, 1992:63; Hanson, 1995; Kaynak, 2003). TQM also possesses many features of a meta-theory. In fact, TQM is usually described as a total system approach (P&G 1992; Sitkin et al. 1994). One of TQM's major principles is the appreciation of systems (Leonard and McAdam, 2003). Therefore, a comparison of TQM with another meta-theory can help to identify the theoretical foundation for TQM (Vancouver, 1996). GST contains a large body of theoretical as well as methodological knowledge. The present paper discusses only the major principles of GST.  Specifically, the topic of how general systems theory relates to the organization sciences is reviewed, and TQM is compared with GST based on a list of common principles found among many contemporary GST theorists.  Systems thinking can be traced far back in human history. However, the term General System Theory (GST) was not coined until the 1950s, when the biologist von Bertalanffy integrated earlier systems thinking and proposed the General System Theory. The major concern of GST is to provide a superstructure which can be applied to various scientific fields. von Bertalanffy's work stimulated many theorists to develop systems theories in one form or another. Examples of such theories are numerous. The economist Kenneth Boulding wrote on GST as a basic structure of science in 1956. In the organization sciences, William Scott (1961) linked GST with organization theories. Seiler's framework (1967) identified a behavioral system to model organizations. Churchman (1968) delineated five considerations for an organizational system based on the definition of a system. Herbert Simon pointed out the complexity and challenge of applying GST to organizations (1969).  GST covers the discussion of both mechanistic/closed/nonliving systems and organic/open/living systems. 
The mechanistic view of systems was used extensively in the natural sciences, such as thermodynamics. In business management, many earlier paradigms, such as Taylor's scientific management, also rest on closed-system assumptions (Sitkin et al, 1994). However, modern organization theory is more in tune with open systems theory (Sitkin et al, 1994). The organization model presented by Seiler (1967), for example, is an open systems model. Among the various open system theories, the most famous is the living systems theory (LST) proposed by James G. Miller (1965). He set forth 165 hypotheses, stemming from open systems theory, which could be applicable to two or more levels of systems (Miller 1965, 1978). LST identifies a hierarchy of eight levels of living things, from cells to supranational systems (cells, organs, organisms, groups, organizations, communities, societies, supranational systems). For each level, twenty processes (called subsystems) are identified as critical to the survival of the system. The contribution of Miller's LST to organization studies is that it excluded nonliving systems from GST and addressed organization-related issues such as structure, process, subsystems, information, system boundaries, growth and integration. LST set the stage for systems study in the social sciences.  More recently, LST stimulated many new frameworks in the study of social systems. In sociology, Bailey (1992) proposed a Social Entropy Theory. Bailey (1992) also integrated his Social Entropy Theory with other contemporary systems theories such as neofunctionalism (Alexander and Colomy, 1990), structuration theory (Giddens, 1979) and LST. He called the integrated body of knowledge the New Systems Theory. In England, Hanson (1995) wrote on 31 general systems theories related to behavioral science.

 

Compensation Structure, Perceived Equity and Individual Performance of R&D Professionals

Jin Feng Uen, National Sun Yat-sen University, Taiwan, R.O.C.

Shu Hwa Chien, National Sun Yat-sen University, Taiwan, R.O.C.

 

ABSTRACT

In order for individuals to improve their work performance they must be sufficiently motivated, and compensation is the most important source of motivation for professionals. A major issue in designing a compensation structure for such individuals is the equity they perceive they are gaining. For this reason, this research discusses the relationship between compensation structure, perceived equity and individual performance. After surveying 258 R&D professionals from high-tech organizations in Taiwan, we put forward the argument that skill-based pay and job-based pay lead R&D professionals to believe they are receiving enhanced equity, which will then lead to better performance on their part. High-tech organizations succeed through a combination of innovation, conceptualization, and commercialization of new technological ideas (Newman, 1989). They emphasize technology in their business strategy, and investment in R&D activities accounts for a relatively high proportion of their total expenditure (Milkovich, 1987). It has long been recognized that R&D professionals function as a separate occupational group in manufacturing organizations, with a remit to provide those organizations with a competitive edge. Therefore, gaining this competitive advantage requires that R&D professionals in high-tech organizations be properly motivated.  Compensation is a critical factor to consider when strategic business planning is being undertaken (Lawler, 1995), since it is not only a question of labor costs but also of employee motivation. Compensation is the most important source of motivation for professionals, including R&D specialists. On account of this, a high-tech organization must develop compensation packages capable of attracting, retaining and motivating R&D professionals (Milkovich & Newman, 1999).  A major issue in designing such a compensation structure is perceived fairness (Milkovich & Newman, 1999; Konopaske & Werner, 2002). Most studies on compensation have focused on pay level and pay structure, with few of them discussing the relationship between compensation structure, perceived equity and individual performance. Here, we attempt to discern the important features of an effective compensation structure by reviewing the literature, and then discuss the relationship between compensation structure, perceived equity and individual performance.  Compensation is a form of reward that organizations use to motivate employees to behave in ways the organization desires. Compensation structure can be divided into three types of remuneration: skill-based pay (SBP), job-based pay (JBP), and performance-based pay (PBP) (Lawler, 1987). SBP means that compensation is determined by the employee's skill and knowledge (Zhu, 1996). This is because as employees acquire greater expertise, they become more adaptable and capable of performing multiple roles, with a broader understanding of the work process, and thus become more aware of their contribution to the organization (Flannery, Hofrichter & Platten, 1996) and of the importance of their role within it. JBP means that compensation is determined by the degree of difficulty, responsibility, and relative value of a job (Henderson, 1989). Finally, PBP means that compensation is determined by the employee's output or performance.  Equity exists when the ratio of the individual's inputs to the individual's outputs equals the ratio of the comparison person's inputs to the comparison person's outputs (Adams, 1965). 
Two main categories of equity can be distinguished: distributive equity, which is the evaluation of the outcomes received in the exchange relationship with the organization, and procedural equity, which is the equity of the allocation process, or the way superiors generally arrive at decisions (De Boer, Bakker, Syroit, & Schaufeli, 2002).  Under SBP, not every employee has equal opportunities to learn new skills or knowledge within a company. This problem may become more serious when the contributions of the trained employees do not increase. Lawler and Ledford (1985) argue that SBP may even increase the training costs of an organization. Furthermore, because of the problems of skill evaluation and the complexity of managing the training and its application, an organization may experience significant difficulties.  Under JBP, job evaluation is the main technique for determining the value of a job. Because every unit in the organization performs different jobs and perhaps uses various procedures and methods, it is not easy to evaluate consistently and fairly even those job posts that appear similar but are located in different units. Kanter (1987) believes that techniques of job evaluation are unable to fully capture the employee's contribution. Weiner (1991) claims that it is not easy to maintain internal equity across a whole organization.
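Restated compactly, the Adams (1965) condition cited above says that perceived equity holds when the focal person's ratio of outputs to inputs matches the comparison person's ratio. The symbols below are ours, not the paper's: O denotes outputs (outcomes), I denotes inputs, and the subscripts p and c denote the focal person and the comparison person.

\[
\frac{O_p}{I_p} \;=\; \frac{O_c}{I_c}
\]

Perceived inequity, and with it the motivational and performance consequences the paper examines, arises when the two ratios diverge.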

 

The Content Continuum: Extending the Hayes & Wheelwright Process-Product Diagonal to Facilitate Improvement of Services

Dr. Tony Polito, East Carolina University, Greenville, NC

Dr. Kevin Watson, University of New Orleans, New Orleans, LA

 

ABSTRACT

The explanatory power of the Hayes & Wheelwright Process-Product Matrix, as well as the diagonal embedded within it, fails under counterexamples of mass customization. When the diagonal is extended into the realm of service products using Schmenner's Service Process Matrix, an expanded framework emerges. That framework, herein coined The Content Continuum, appears to be highly explanatory and fits well with many existing service classification schemes. A number of Original Levi's retail outlets offer made-to-order women's blue jeans on a mass customization basis. Customer measurements are entered at the POS terminal and directed to a numerically controlled cutting device at the company's Tennessee plant. Levi's customization strategy effected a 300% increase in sales and a simultaneous reduction in inventory at introduction (1994a). Toward further improvement, the company has co-developed a point-of-sale "body scanner" expected to decrease response time and improve the quality of the process (1994b). The application is not unique; Tom Peters notes a similar process for the tailoring of suits at Saks Fifth Avenue (Peters, 1987). Andersen windows, Motorola pagers, and Hallmark Create-A-Card vending machines provide examples of mass customization from other industries. Even McDonald's, the bellwether of Levitt's industrialized service (1972, 1976), now carries hundreds of menu items targeted by region, rotates specialty items seasonally or monthly, and offers its once standardized burgers on an assemble-to-order basis. In some ways, such mass customization (Pine II, 1993) implies a shift towards craft shop production, including higher product heterogeneity and increased levels of customer involvement, specification, and delivery convenience. However, it also expects increased volumes, economies of scale, capitalization, and commodity-like behaviors, as found in flow production of goods. These contradictions in the trend to mass customization represent directly opposing shifts along the main diagonal of the Hayes and Wheelwright Process-Product Matrix (1979a, 1979b), which depicts the relationship between a product's growth and volume and its process technology. The Process-Product Matrix, a generally accepted operations management framework, is rich with implications for strategy, operations, and marketing. Here, however, it falters under the higher service content of the mass customized product. Increased volumes, economies of scale, capitalization, and commodity-like behaviors do, however, represent an outward shift along the diagonal within the Service Process Matrix, developed by Schmenner (1986, 1993, 1995) for the equivalent analysis of service products and processes.  Industry is rich with examples of efforts to increase the service content of goods, a trend that suggests the need for a framework which places the Process-Product Matrix within the context of products viewed as bundles of goods and services. This paper first reviews a relevant sampling of dominant service perspectives that suggest such a framework. Next, a framework that merges these perspectives with the Hayes & Wheelwright Process-Product Matrix is illustrated and discussed. Next, the automobile and dining industries are used as illustrative examples of the functionality of the framework. Some anticipated questions are then proactively addressed. Finally, a comparison is made with a number of other service classification schemes to demonstrate an acceptable degree of harmony. 
Hill's economic analysis of goods and services (1977) provides a foundation for synthesis. Both goods and services are transaction-based; the transfer of ownership identifies a good, and the change in the condition of an object identifies a service. Hill's perspective allows for the bundling of goods and services within a single product transaction, a position supported by numerous researchers (Bell, 1981; Fitzsimmons & Sullivan, 1982; Foxall, 1985; Kotler & Armstrong, 1980; Levitt, 1972; Rathmell, 1966; Sasser, Olsen, & Wyckoff, 1978; Shostack, 1977). Advocacy for bundling from a practitioner perspective is exemplified by Lexus management, which re-defines its product not as a manufactured good but as a luxury service package. Hill also recognizes the potential utility of a good or service and delineates it from the underlying transaction; the interpretation of service as consumers transacting for utility finds modern support as well (Hsieh & Chu, 1992; Murdick, Render, & Russell, 1990). If both goods and services are viewed by consumers as creating utility, then a single product variously bundled with proportions of goods and services allows the efficient and rational consumer to effect equivalent utility substitutions, e.g., grocery stores versus restaurants in the case of food products. Therefore, a specific product should be viewed on a continuum representing its relative proportion of goods and services, a conclusion reached by others (Rathmell, 1966; Sasser et al., 1978; Shostack, 1977).

 

Japan’s Liquidity Trap: An Empirical Analysis

Sujata Jhamb, IILM, Institute for Integrated Learning in Management, New Delhi, India

 

ABSTRACT

The liquidity trap is a phenomenon which may be observed when the economy is in severe recession or depression. Real GDP stops growing and the price level is stable or falling. Nominal interest rates are close to zero and cannot decline further, and the speculative demand for money becomes infinitely interest-elastic. Any increase in the supply of money will not be used to purchase government bonds but will be hoarded as idle cash balances. Money and short-term government securities become perfect substitutes, as the yields from holding both are zero. In such a scenario only policies other than monetary policy can help raise output and employment.  This paper studies the prolonged recession in Japan during the 1990s and finds that the liquidity trap has been one of the main reasons for the ineffectiveness of monetary policy, leaving the Japanese economy limping. The current liquidity trap can be treated as a 'partial paralysis' of the Japanese financial system. Saving behaviour is important because it helps to determine the evolution of future consumption opportunities. Japan's post-war saving behavior can be viewed in this light. Saving has enabled Japan to increase its stock of assets rapidly, raise worker productivity, sustain rapid rates of economic growth, and raise its standard of living.  A comparison of Japanese gross domestic saving with that of other major industrial countries over the 30-year period 1970-2000 shows that there are wide disparities in saving behavior across countries, and by large margins: the other countries are clustered in terms of their saving rates, with the U.S. and U.K. at the low end of the spectrum. This shows that Japan's saving rate is high, both in absolute terms and relative to other countries.  Too little saving will be sub-optimal in that a low level of capital formation will result in a low level of sustainable consumption.  Japan's severe recession, impending financial crisis, and liquidity trap situation motivated us to study a new variable: saving behavior in Japan. In this paper, we treat high saving as a main cause of the liquidity trap, rendering monetary policy ineffective.  The argument that we present is that despite recession and extremely loose monetary policy, savings have remained more or less the same, not responding to falling interest rates. Therefore, we first need to analyse the saving trend in Japan and then empirically determine the causes of the high saving that has led to this liquidity trap situation.  Japan's high saving rate relative to those of other industrial countries gives rise to the question of whether Japan is saving "too much". Too much saving, however, can also be sub-optimal because present and future consumption opportunities are forgone in favour of building and maintaining the stock of capital.  This saving behaviour motivated us to fit a model to estimate saving in Japan, because we see high saving in Japan as a major culprit behind the liquidity trap situation prevailing there. Hence, we were motivated to fit an empirical model of saving for the period 1970-2003 in a multiple regression framework. In this paper, we create an empirical saving model based on 30 years of time-series data for Japan, from 1970 to 2000. 
Using a priori knowledge, the savings function takes the following implicit form:  Gross Domestic Saving = f(Deposit Interest Rate, GDP Growth, Inflation, Age Dependency).  Hence, we postulate that gross domestic saving in Japan depends on the deposit interest rate, GDP growth, and the inflation rate, and we introduce a variable not used in most of the earlier research on saving: the age-dependency ratio, i.e., the ratio of dependents (those over 65 and under 15 years of age) to the total working population. We fit the savings function in a multiple regression framework to capture the causes of the rigidity in saving which is responsible for the liquidity trap, and our research reveals that the most important variable reflecting this behaviour is the age-dependency ratio.  This paper thus uses a regression-based framework with annual time-series data on key variables in the Japanese economy, such as saving, and employs a multiple regression model to analyze Japan's saving behavior in an attempt to explain the liquidity trap situation currently prevailing in Japan.
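A minimal sketch of how such a regression might be estimated follows. It is our illustration, not the paper's estimation procedure; the data file and column names are hypothetical, and the coefficient signs and significance would be read off the fitted summary.

# Hedged sketch: estimate the postulated savings function
#   GDS = b0 + b1*DepositRate + b2*GDPGrowth + b3*Inflation + b4*AgeDependency + e
# The file name and column names are assumptions, not the paper's data set.
import pandas as pd
import statsmodels.formula.api as smf

japan = pd.read_csv("japan_annual_1970_2000.csv")   # hypothetical annual data
model = smf.ols(
    "gross_domestic_saving ~ deposit_rate + gdp_growth + inflation + age_dependency",
    data=japan,
).fit()
print(model.summary())   # coefficient estimates, signs, and t-statistics

A saving rate that does not respond to the interest rate, as the paper argues, would show up here as an insignificant or wrongly signed deposit-rate coefficient alongside a strong age-dependency effect.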

 

Teaching a Research-Oriented, Graduate Global Marketing Course to Adult Learners in a One-Month Format

Dr. Juan España, National University, La Jolla, CA

 

ABSTRACT

This paper presents the author's experience teaching a heavily research-oriented, applied graduate course to a class of working adults in a one-month format. A common concern among instructors of research-oriented courses in a compressed format is that the term length might not allow for a thorough treatment of the theoretical issues involved and their application to real-life situations. As the author explains, appropriate coverage can be achieved within the framework of a one-month course. The key to success is the early use of efficiency-enhancing mechanisms aimed at, among other things, upgrading students' library and research skills at the very beginning of the course, facilitating and jump-starting students' research, and ensuring high levels of student participation. There is a growing body of research indicating that compressed teaching formats such as one-month courses lead to learning outcomes that are at least comparable, if not superior, to conventional semester-long formats lasting 12-16 weeks (Serdyukov et al, 2003). One-month courses are taught in a sequential fashion, one course at a time, unlike the parallel, standard three-course load typical of semester-long courses. This sequential process allows students to concentrate fully on one subject area. Some authors provide support for the belief that compressed formats might actually be more efficient because "concentrated study may cultivate skills and understanding which will remain untapped and undeveloped under the traditional system" (Scott and Conrad, 1992). In the same vein, psychological research indicates that "deep concentration", "immersion" and "undivided intentionality" lead to "optimal experiences" (Csikszentmihalyi, 1982). From the point of view of the instructor, teaching only one section in a sequential fashion allows for a much higher degree of concentration and for better preparation than would be the case when simultaneously teaching three different sections, with at least two different course preparations. In addition, compressed formats, due to the shorter time frame, usually employ a variety of efficiency-enhancing mechanisms to facilitate and accelerate the communication and learning process. Among these mechanisms are the instructor's "immediate reply and feedback" (Serdyukov et al, 2003); increased reliance on electronic means of presentation, communication and delivery; streamlined but detailed course outlines; precise delineation of student performance expectations and the assignments schedule; and so on.  National University, located in California and focusing on the needs of adult learners, is one of a few schools across the United States to use the one-month format for both graduate and undergraduate courses. Students enrolled in National University programs are mostly working adults, which adds another non-traditional dimension to the learning environment. 
Among the characteristics of adult learners cited by various sources are the following: adult learners are largely self-directed; they perceive themselves as doers and apply learning in order to be successful as workers, parents, and so on; they are practically oriented; they have considerable experience to which they relate new learning; they are more concerned about the effective use of time than younger students; they want to see the immediate applicability of learned material; and they have previous formal educational experiences, which might have been negative ones (National Center for Research in Vocational Education, 1987; Knowles, 1984). Furthermore, according to Perry's scheme of student development, learners go through four different stages in terms of critical thinking skills: dualism, multiplicity, contextual relativism, and dialectic, with the dialectic stage representing the highest level of sophistication (Perry, 1970). In the dualism stage, students believe that there are clearly right and wrong answers to any question and that the teacher's role is to tell them which is the right answer; in the multiplicity stage, students realize that there might be different answers, but they are unable to choose between the alternatives; in the contextual relativism stage, students realize that answers (opinions) need support to be accepted as valid; in the dialectic stage, learners are able to view a problem from different perspectives and realize that the best answer might depend on the context. At this final stage, students are able to interpret and give meaning to learned material on their own. Adult learners can most likely be found at the two highest levels of critical thinking. At these stages, learners do not see the professor as the final, omniscient authority figure, but as a valuable point of reference against which to sound out and validate some of their own findings and opinions.  When I was assigned to teach a one-month graduate course in Global Marketing to students who are mostly full-time employees, my first thought was: How can I, within this short time frame, achieve the dual goals of providing a rigorous theoretical foundation while at the same time motivating working students to perform graduate-level, applied research work? One of the main requirements in the course was the completion, by groups of students, of a thorough global marketing plan aimed at introducing a new product or service to international markets.  

 

Predicting Impending Bankruptcy Using Audit Firm Changes

Dr. Yining Chen, Ohio University, Athens, OH

Dr. Ashok Gupta, Ohio University, Athens, OH

Dr. David L. Senteney, Ohio University, Athens, OH

 

ABSTRACT

Unlike prior research, we investigate the incremental explanatory power of auditor changes beyond the information conveyed by traditional financial statement ratios in predicting bankruptcy. We find that auditor changes are important in predicting impending bankruptcy and convey important information not reflected in traditional financial statement ratios alone.  In fact, we find compelling evidence that directional knowledge regarding auditor changes, such as changes from large accounting firms to small accounting firms, provides incremental explanatory power in predicting impending firm failure beyond what is conveyed by traditional financial statement ratios and auditor changes considered jointly.  Although, to our knowledge, the existing literature provides no empirical evidence in this regard, the result is intuitive: one motivation for clients to change audit firms is to seek less conservative professional auditors, as smaller audit firms may be, as a strategic response to the manifestation of the financial statement effects of impending bankruptcy.  The probability of firm financial failure is crucially important information to shareholders, creditors, management, and the various company stakeholders, and the assumption that the firm is a "going concern" is important to internal and external constituencies as well.  In practice, professional groups of both auditors and security analysts serve as an effective market mechanism for monitoring firm financial health and communicating to the various external constituencies the likelihood of firm failure.  Generally, three approaches are used to predict impending firm bankruptcy. (1) Financial statement ratio-based prediction:  Beaver [1966] and Altman [1968], in addition to many others (c.f., Altman, Haldeman, and Narayanan [1977], Collins [1980], Ohlson [1987], and Platt and Platt [1991]), have provided ample compelling empirical evidence establishing the financial statement ratio-based model as the premier specification for forecasting impending firm failure. Because the explanatory variables used in firm failure prediction models should vary systematically between bankrupt and non-bankrupt firms, financial statement-based ratios are intuitively appealing as reflecting underlying economic differences between financially healthy and financially distressed sets of firms. However, there is disagreement regarding which of the various financial statement ratios perform best.  This result is somewhat intuitive, as the same set of financial statement-based ratios is unlikely to perform equally well for all firms across their varying economic circumstances in predicting impending bankruptcy.  (2) Qualified auditor opinion-based prediction: A substantial body of literature supports the contention that the general as well as the professional public expect qualified auditor opinions to serve as early warning signals of impending firm bankruptcy (Journal of Accountancy [1982, 1983], Mednick [1986], and Connor [1986]). Of course, no one expects qualified auditor opinions to serve as perfect signals of firm failure, but rather that they serve as good warning signs commensurate with a significant association with actual bankruptcies.  Evidence presented in Hopwood, McKeown, and Mutchler [1989, 1994] solidly supports assertions that auditor opinions qualified for going concern, consistency, and subject-to issues significantly improve the ability of traditional financial ratio-based models to predict impending bankruptcy.  
However, neither financial statement ratio-based prediction models nor auditor opinion modifications are very good predictors of bankruptcy when population proportions, differences in misclassification costs, and financial stress levels are considered (Hopwood, McKeown, and Mutchler [1994, p. 425]).  (3) Auditor change-based prediction: Another aspect of the auditor-client relationship which bears directly on both qualified auditor opinions and firm failure is auditor changes.  Chow and Rice [1982] provide somewhat marginal evidence that auditor changes are associated with qualified auditor opinions in their examination of a phenomenon commonly referred to as "opinion shopping".  In addition, Schwartz and Menon [1985] provide contingency analysis results suggesting that qualified auditor opinions as well as auditor changes are associated with subsequent firm bankruptcy.  While possible motivations for audit clients seeking auditor changes may include disputes over accounting methods, displeasure over the type of audit opinion, dissatisfaction with an auditor's failure to detect internal control weaknesses or inaccuracies in accounting records, changes in corporate management, the need for additional audit services, and conflict over audit fees, failing firms intuitively have incentives to seek less conservative auditors (i.e., auditors more accepting of income-increasing applications of generally accepted accounting principles) as financial statement indicators of firm failure begin to appear.  Although the existing empirical evidence indicates that the association between auditor changes and subsequent firm failure is not as strong as the association between qualified auditor opinions and subsequent firm bankruptcy, it is nonetheless significant and may provide an additional important source of information, beyond that conveyed by the qualified auditor opinion, about a client's more aggressive preferences in the application of accounting principles, which is useful in explaining and anticipating firm bankruptcy. 
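As a minimal sketch of how the incremental explanatory power described above might be assessed (this is not the authors' actual specification; the data file, ratio names, and auditor-change dummy variables are assumptions), one could compare a ratios-only logit with a logit that adds auditor-change indicators and test the restriction with a likelihood-ratio statistic:

# Hedged sketch: do auditor-change indicators add explanatory power beyond
# financial ratios in a bankruptcy logit? All names below are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

firms = pd.read_csv("bankruptcy_sample.csv")   # hypothetical firm-year data

base = smf.logit(
    "bankrupt ~ working_capital_to_assets + retained_earnings_to_assets + "
    "ebit_to_assets + debt_to_assets", data=firms).fit(disp=False)

full = smf.logit(
    "bankrupt ~ working_capital_to_assets + retained_earnings_to_assets + "
    "ebit_to_assets + debt_to_assets + auditor_change + big_to_small_change",
    data=firms).fit(disp=False)

# Likelihood-ratio test for the two added auditor-change terms.
lr = 2 * (full.llf - base.llf)
p_value = stats.chi2.sf(lr, df=2)
print(f"LR statistic = {lr:.2f}, p-value = {p_value:.4f}")

A significant likelihood-ratio statistic would indicate that the auditor-change variables carry information beyond the ratios, which is the kind of incremental evidence the abstract describes.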

 

Financial Crisis in Emerging Markets: Case of Argentina

Dr. Balasundram Maniam, Sam Houston State University, Huntsville, TX

Dr. Hadley Leavell, Sam Houston State University, Huntsville, TX

Vrishali Patel, Sam Houston State University, Huntsville, TX

 

ABSTRACT

In the past ten years, several emerging markets experienced severe financial crises: Mexico in 1994, Asia in 1997, Russia in 1998, Brazil in 1999, and Argentina in 2001. The patterns in these regions/countries are markedly similar. This paper will discuss the reasons for the financial crises in emerging markets with a focus on Argentina. The paper will discuss the policies of the Argentine government before the crisis, the underlying factors that led to the crisis and the resulting effects of the crisis. Finally, the paper will discuss some of the recommendations to solve Argentina’s problems.  The mid-1990s were marked by severe crises in a number of emerging markets. It started in 1994 with the Mexican crisis.  Asia’s devastating financial crisis hit the global market, followed by Russia’s and Brazil’s financial crises in subsequent years.  Most recently, Argentina has been drastically impacted financially.  Most of these financial crises started with a currency crisis. The common features of the financial crises include the vast appreciation of domestic currencies, skyrocketing interest rates and large capital outflows. By definition, emerging markets are highly dependent on imports and rely on few export activities. This import/export inequity exacerbates exchange rate volatility; any movement of the exchange rate significantly affects the market structure. For several years, investors and the IMF considered Argentina the emerging markets’ poster child for success.  Argentina had aggressively privatized state-owned businesses, defeated inflation, strengthened its banking system, and resolved to keep the economy open and the currency stable. Initial results were overwhelmingly positive, but unfortunately, not long-term. Argentina had pegged its currency to the U.S. dollar at a fixed one-to-one rate. As the dollar appreciated, so did the peso. This affected Argentina’s exports adversely while it continued importing at the same rate. The country’s account deficits contributed to investors’ loss of confidence in the economy.  This caused a ripple effect leading to enormous capital outflows and the currency collapse (Wucker, 2002).  Krueger’s (2002) review of Argentina’s economy noted a six percent annual growth rate between 1990 and 1997. The trend turned abruptly and Argentina recorded four years of recession. The roots of this slowdown can be traced back to the early 1990s. During this period Argentina had recession and hyperinflation. To solve these problems the convertibility plan was introduced, and inflation dropped to single digits in only three years. Argentina’s policies acknowledged the lessons learned from the Mexican and Asian crises. Argentina took steps to strengthen its banking system and it accepted foreign ownership. Despite these reforms, Argentina was in trouble in a short time. Two main factors contributed to the problem: weak fiscal policy and an overvalued currency. The federal government constantly ran deficits in the 1990s and its fiscal position was declining. The consolidated budget deficit was around 2.5 percent of GDP during 1997 and 1998, while the economy was booming. The deficit would have been still higher if the privatization receipts had not been used to finance current expenditure. One of the reasons for the problems in public finance was unfunded pension liabilities.  Another was the increasing gap between public and private sector wages. 
Due to this differential, public sector employment was 12.5 percent of the labor force, much higher than in Brazil, Chile, Indonesia, the Philippines and Thailand. Because of the weak public finances, the consolidated public debt burden started mounting. Argentina did not have the capacity to shoulder the increasing debt burden. Since Argentina could not raise enough tax revenue, it was more exposed to external shocks and market sentiment, and it had to service most of the debt in foreign currency, which was difficult due to the low exports. By the end of 2001 the debt-to-GDP ratio had reached 130 percent. Krueger analyzed the lessons learned for crisis prevention and resolution. She recommended that the IMF closely examine the debt dynamics of a country, since strong currency boards cannot exist without strong fiscal and structural policies. Her recommendation was for a mechanism to help a country exit from unsustainable debt dynamics. She attributed Argentina’s fall to weak fiscal policies, an unfavorable external environment and shocks, overvaluation and the debt burden (Krueger, 2002).  Wise and Pastor (2001) focused on the recession and political position in Argentina; the illusions of macroeconomic stability created by the convertibility plan; and a comparison of other Latin American countries to Argentina. After Brazil’s currency devaluation in January 1999, Argentina entered a deep recession.  According to Stiglitz (2002), there have been several explanations for Argentina’s crisis.  One of them is that the crisis had been developing for several years and at last erupted in December 2001; since then the economy has worsened.  Stiglitz found that many believed the crisis was caused by political corruption and huge deficits. Some economists suggest that the crisis would have been avoided if Argentina had followed the IMF’s warning to rigorously cut spending.

 

Measuring and Reporting of Intellectual Capital Performance Analysis

Dr. Junaid M. Shaikh, Curtin University of Technology Malaysia

 

ABSTRACT

This paper reviews several internal and external measures of intellectual capital. Internal measures, such as the Balanced Scorecard, are used to manage, guide and enhance a firm’s intellectual capital so it can be leveraged to generate greater value for the company. External measures, which include market-to-book value, Tobin’s Q and real option theory, focus on investors and others attempting to value a company and provide a signal to external parties. Here, greater emphasis is placed on external reporting, which consequently is subject to accounting standards and financial regulations, although a specific accounting standard that adequately addresses intangibles has yet to be developed. There is indeed much to support the assertion that IC in the new century will be instrumental in the determination of enterprise value and national economic performance. Stemming from this awareness of the value of know-how is a drive to establish new metrics that can be used to record and report the value attributable to knowledge within an organization. The task has been given impetus by the fact that early work appearing in the accounting financial reports of Swedish companies involves the application of non-financial metrics and focuses on intangible assets. This represents a significant departure from traditional financial and management accounting orthodoxy. Intellectual capital is becoming the preeminent resource for creating economic wealth. Tangible assets such as property, plant, and equipment continue to be important factors in the production of both goods and services. However, their relative importance has decreased through time as the importance of intangible, knowledge-based assets has increased. This shift in importance has raised a number of questions critical for managing intellectual capital. How does an organization assess the value of such things as brand names, trade secrets, production processes, distribution channels, and work-related competencies? What are the most effective management processes for maximizing the yield from intellectual assets?  Virtually every sector of the economy has felt the impact of increased intellectual capital. In the steel industry the labor cost per ton of steel has been reduced significantly. In the airline industry reservation systems have become a major source of revenue. In manufacturing, product design is handled on computers without the need for drawings or mockups. The list goes on and on. In addition, intellectual capital has contributed to the creation of whole new types of businesses and ways of doing business. In fact, many companies rely almost completely on intellectual assets for generating revenues. For example, the software industry is primarily knowledge based, with most products never taking a tangible form, being created and delivered electronically.  The Australian Accounting Standards Board has announced that on its current work program the Intangible Assets project has been ranked as the highest priority.  This paper provides a background discussion on intellectual capital. Intellectual capital covers a multitude of areas, and economists, accountants and standard setters have yet to agree on a global definition. It is often referred to as intangible assets or intangibles. A simple definition of intellectual capital is: knowledge that can be converted into value.  Another definition states: intellectual capital is intellectual material (knowledge, information, intellectual property and experience) that can be used to create wealth. 
Researchers first became interested in defining intellectual capital in the 1960s, but the demand for information at that time was not strong enough to drive continued research and development. However, in the last decade the change in the global economy, from being manufacturing and industry-based to being knowledge-based, created renewed interest in intellectual capital and increased demand for measuring and reporting its effect on business and profitability.  Intellectual capital includes, inter alia, inventions, ideas, general know-how, design approaches, computer programs, processes and publications. Understanding the different components helps improve its management and use at a strategic and operational level.  One of the most popular models for classifying intellectual capital is the Hubert Saint-Onge model developed in the early 1990s. It divides intellectual capital into three parts: human capital, structural capital, and customer capital.  Brooking (1996, p. 13) suggests that intellectual capital is comprised of four types of assets: (1) market assets, (2) intellectual property assets, (3) human-centered assets, and (4) infrastructure assets. Market assets consist of such things as brands, customers, distribution channels, and business collaborations. Intellectual property assets include patents, copyrights, and trade secrets. Human-centered assets include education and work-related knowledge and competencies. Infrastructure assets include management processes, information technology systems, networking, and financial systems.  The recent focus on knowledge and innovation and their impact on the economy has created a renewed interest in intellectual capital: its importance to business, how it is defined and measured, and the role of government.
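The external measures mentioned above (market-to-book value and Tobin's Q) can be illustrated with a minimal sketch. The figures are hypothetical, and the Tobin's Q shown is the common book-asset proxy rather than any particular measure endorsed in the paper.

# Hypothetical balance-sheet figures ($ millions)
market_cap = 4_200.0     # market value of equity
book_equity = 1_500.0    # book value of equity
book_debt = 900.0        # book value of debt
total_assets = 2_400.0   # book value of total assets

market_to_book = market_cap / book_equity
tobins_q = (market_cap + book_debt) / total_assets   # common book-asset proxy for Q

print(f"market-to-book: {market_to_book:.2f}")
print(f"Tobin's Q (proxy): {tobins_q:.2f}")
# Ratios well above 1 are often read as evidence of intangibles not on the balance sheet.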

 

Executive Compensation Contracts: Change in the Pay-Performance Sensitivity within Firms

Dr. Jinbae Kim, Korea University, Seoul, Korea

 

ABSTRACT

Contract theory predicts that optimal compensation contracts depend not only on firm-specific factors but also on CEO-specific factors. However, previous research on compensation generally focuses on the effects of firm-specific factors on the pay-performance sensitivity in compensation contracts. Using compensation data on 52 firms that each have two CEOs who served for at least eight years during the sample period, this paper investigates changes in the pay-performance sensitivity within firms around the time of CEO departure. Empirical evidence shows that firms change their pay-performance sensitivity after the CEO leaves, in a manner consistent with predictions from contract theory.  Previous research provides various insights into the pay-performance sensitivity in executive compensation contracts. Jensen and Murphy (1990) measure the sensitivity of dollar changes in executive compensation to dollar changes in shareholder wealth and claim that the relationship is not strong enough to give CEOs adequate incentives. Haubrich (1994) and Baker and Hall (1998) argue that the sensitivity level reported by Jensen and Murphy may be enough to create incentives for the CEO and may be optimal under certain situations. Numerous other studies, including Natarajan (1996), Baber et al. (1999) and Prendergast (2002), use the pay-performance sensitivity measure in order to empirically test various hypotheses. Gibbons and Murphy (1990), Janakiraman et al. (1992) and Aggarwal and Samwick (1999) use the pay-performance sensitivity to examine the relative performance hypothesis. Lambert and Larcker (1987) and Ittner, Larcker and Rajan (1997) use the pay-performance sensitivity to investigate the relative weights on performance measures.  Agency theory suggests that there are many factors that influence the pay-performance sensitivity.  It is well documented that firm-specific factors such as riskiness, size, growth opportunities, and governance structure affect the pay-performance sensitivity. The theory also suggests that CEO-specific factors should be an important determinant of optimal compensation contracts. It implies that firms should adjust their pay-performance sensitivity to reflect the ability and characteristics of the CEO. When the CEO leaves, the firm should alter the pay-performance sensitivity to fit the caliber of the new incoming CEO.  Previous studies in compensation research, however, generally ignore the possibility of changing the pay-performance sensitivity when executives are replaced. Studies including Jensen and Murphy (1990) and Gibbons and Murphy (1990) pool data over cross-sections and time for regression analysis. This research design does not allow a distinction between within-firm sensitivity changes and cross-firm changes. Other studies, including Lambert and Larcker (1987) and Ittner, Larcker and Rajan (1997), specifically recognize the differences in compensation contracts across firms and run firm-specific regressions. While this research design acknowledges that the sensitivity differs across firms, it implicitly assumes that the sensitivity stays constant during the sample period even if CEOs are replaced. While several studies incorporate multiple intercepts into the regressions, allowing different levels of base compensation for different CEOs, few studies allow different slopes, i.e. different sensitivities, in the regressions. The question of sensitivity changes within firms around CEO departures has never been investigated. This paper probes that question. 
Using compensation data on 52 U.S. firms that have multiple CEOs who served for at least eight years during the sample period, I find that there is a significant shift in the pay-performance sensitivity within firms around CEO departures. This result confirms the theoretical prediction that optimal compensation contracts should be adjusted to individual CEOs. In light of this result, it is necessary to interpret more carefully the results of previous compensation research that frequently ignores the differences in individual compensation contracts or assumes the contracts are the same even when there is a CEO change. There is no consensus on how to measure the pay-performance sensitivity. Most previous studies estimate the pay-performance sensitivity as the slope coefficient in a regression of compensation on performance. The sensitivity measured in this way differs depending on how performance is measured. Some studies use absolute dollar amounts to measure performance. Jensen and Murphy (1990), for example, regress the change in annual CEO compensation on changes in shareholder wealth. Holmstrom (1992) calls this type the “arithmetic” sensitivity. Most other studies use rates of return for performance measurement. ROE or ROA (or their annual changes) are most frequently used. Holmstrom (1992) calls this type the “geometric” sensitivity. Rosen (1992) claims that the geometric sensitivity is superior to the arithmetic sensitivity. Following Rosen’s suggestion, I measure performance using the annual change in return on assets, a measure used by numerous previous studies. As for the compensation measure, the change in the log of cash compensation is used to make the comparison consistent; it is equivalent to the continuously compounded rate of change in cash compensation. The pay-performance sensitivity is then the slope coefficient in the following firm-by-firm regression.
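The regression equation itself is not reproduced in this excerpt. As a hedged sketch of the firm-by-firm regression described above (change in log cash compensation on the annual change in ROA), the Python fragment below uses synthetic data for a single hypothetical firm; it is illustrative only, not the author's estimation code.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
years = 10
d_roa = rng.normal(0.0, 0.03, years)                         # annual change in ROA
d_log_pay = 0.05 + 2.0 * d_roa + rng.normal(0, 0.05, years)  # change in log cash compensation

fit = sm.OLS(d_log_pay, sm.add_constant(d_roa)).fit()
print("estimated pay-performance sensitivity:", round(fit.params[1], 2))
# A within-firm shift around a CEO departure could be tested by interacting
# d_roa with a post-departure dummy and examining the interaction coefficient.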

 

Political Constraints and the IRS’ Tax Enforcement Actions

Dr. Vijay K. Vemuri, Long Island University, Brookville, NY

Prof. Donald P. Silver, Long Island University, Brookville, NY

 

ABSTRACT

Taxpayer compliance research mainly focuses on taxpayers’ compliance decisions. However, taxpayer compliance is influenced both by the veracity of taxpayers and by the enforcement strategies of the IRS. The perception of excessively coercive enforcement procedures may result in curtailment of the enforcement authority of the IRS. The political processes available to oppose and reform enforcement policies differ for individuals and corporations. The opposition of corporations tends to be swift, well orchestrated, and concentrated on a particular enforcement issue. On the other hand, the opposition of individuals may accumulate into taxpayer antagonism and may call for extensive reform of tax administration, including enforcement policies. This paper analyzes the civil penalties assessed and abated to examine differences in penalties on individuals and corporations for the years 1978 to 2002. The results indicate that systematic differences exist in the incidence and the amount of penalties for these two groups. Further, the time series of penalties for individuals exhibits significant autocorrelation, suggesting a possibility of implicit budgets for penalties. The tax gap, the difference between the taxes the government would collect if every taxpayer reported income and paid taxes honestly and the taxes actually received, is of considerable interest to legislators, the IRS, economists, and the popular press. A tax gap shifts some of the tax burden from dishonest taxpayers to honest taxpayers. The research on tax compliance has provided insights into the decision processes of taxpayers, their economic motivation to comply with the tax laws and other variables that may explain tax compliance behavior. Most research conclusions about determinants of income tax compliance rely on the behavior of taxpayers only. Tax compliance, however, is the result of the actions of two key players: taxpayers and the IRS. Income tax compliance depends on the veracity of taxpayers as well as on the enforcement policies and procedures of the IRS. Theoretical or empirical studies of income tax compliance that concentrate on individual taxpayers’ compliance decision processes may not yield correct predictions; they analyze taxpayer decisions but not the equilibrium interactions between taxpayer compliance and enforcement by the IRS. This paper studies the aggregate enforcement statistics of the IRS and theorizes how the legislative, political and public opinion constraints facing the IRS may shape its enforcement policies and procedures.  The IRS has many mechanisms to enforce compliance with tax laws: tax audits, information reporting, withholding of taxes, and taxpayer education and assistance (Slemrod and Bakija 1996). Tax audits, probably the most visible of the enforcement mechanisms, result in abatements of tax assessments or tax penalties. Very little is known about the decision processes of the IRS that lead to abatements and penalties of tax assessments. In this paper we analyze the aggregate tax assessment, abatement, and penalty statistics. This analysis is needed to better understand the IRS’ enforcement behavior in the taxpayer compliance problem. Decision and game theoretic models have analyzed taxpayers’ compliance decisions with respect to marginal tax rates, income, penalties, and the probability of detection (Allingham and Sandmo 1972, Greenberg 1984, Reinganum and Wilde 1985, 1986). 
The predictions of these models, however, crucially depend on the underlying assumptions. For example, some models predict that taxpayer uncertainty about the IRS’ audit parameters increases compliance (Beck and Jung 1989), while Cronshaw and Alm 1995 show that if the IRS is uncertain about true income and the taxpayer is uncertain about audit technology, compliance will decline as taxpayer uncertainty about the audit policies they face increases. Rhoades 1997 concludes that the Taxpayer Bill of Rights 2, enacted in 1996, may decrease taxpayer compliance due to the cost of false detection imposed on the IRS. Empirical research on taxpayer compliance is constrained by access to taxpayers’ income and expense information. Many studies rely on aggregate data to analyze taxpayer compliance. A few studies have used Taxpayer Compliance Measurement Program (TCMP) data that are aggregated by audit classes or zip codes. Long and Swingen 1989 use TCMP data aggregated by audit classes and observe that third-party information reporting reduced under-reporting errors. Klepper and Nagin 1989 and Kamdar 1995 confirm an increase in taxpayer compliance due to third-party information reporting. Kamdar’s analysis contradicts the widely held belief that higher marginal tax rates result in lower compliance.  Many political scientists have studied the political constraints facing the IRS in tax administration. Scholz 1989 is an excellent review of the political atmosphere confronting the IRS, especially in its enforcement policies and strategies. It illustrates the complex interaction of legislative and administrative processes, fiscal policies, and lobbying efforts, and how they influence the shaping of tax bills and, especially, enforcement policies. Unlike many other enforcement agencies, the IRS has few political allies and lacks a supportive constituency. This relative political isolation has made the IRS vulnerable to legislative proposals limiting its enforcement authority. It also illustrates the political pressures in the IRS’ annual budget deliberations. In particular, despite the cost-effectiveness of IRS examiners, the prevailing political atmosphere has dictated the number of examiners employed in the IRS’ enforcement activities.
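As an illustration of the autocorrelation finding mentioned in the abstract, the sketch below tests a short annual penalty series for first-order autocorrelation with a Ljung-Box test. The series is a synthetic placeholder, not the IRS data analyzed in the paper.

import numpy as np
from statsmodels.stats.diagnostic import acorr_ljungbox

rng = np.random.default_rng(2)
penalties = np.cumsum(rng.normal(5, 2, 25)) + 100   # hypothetical annual penalties, 1978-2002

# Lag-1 sample autocorrelation
x = penalties - penalties.mean()
rho1 = (x[1:] * x[:-1]).sum() / (x * x).sum()
print("lag-1 autocorrelation:", round(rho1, 2))

# Ljung-Box test of the null of no autocorrelation at lag 1
print(acorr_ljungbox(penalties, lags=[1]))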

 

 

Asymmetry in Farm-Milled Rice Price Transmission in the Major Rice Producing States in the U.S.

Sung Chul No, Southern University and A&M College, Baton Rouge, LA

Dr. Hector O. Zapata, Louisiana State University, Baton Rouge, LA

Dr. Michael E. Salassi, Louisiana State University, Baton Rouge, LA

Dr. Wayne M. Gauthier, Louisiana State University, Baton Rouge, LA

 

ABSTRACT

Over the past three decades, agricultural economists have tested whether the retail price response to price increases at a lower market level is similar to the retail price response to price decreases at the same market level. The majority of the empirical studies have focused on farm-retail price transmission.  However, the price transmission effects between the farm and milling levels are as important as the price transmission effects between the farm and retail levels, especially for rice.  Milling transforms rough rice into the more desired milled rice. It is the milled price that transmits changes in farm prices to final consumers. Thus, the paper tested the null hypothesis that decreases in milled prices resulting from decreases in farm prices are as fast as increases in milled prices resulting from increases in farm prices in the major rice producing states, Arkansas, California, Louisiana, and Texas.  Adopting a newly developed econometric methodology, the momentum-threshold autoregressive model (M-TAR), the paper found strong evidence indicative of symmetric pricing behavior for milled rice in Arkansas, California, and Texas.  For Louisiana rice, the results suggested otherwise: Louisiana mill prices responded much faster when milling margins tightened due to farm price increases than when the margins widened due to farm price decreases.  For over three decades, agricultural economists have tested various markets for evidence of retail price asymmetry.  Tests are designed to determine whether the retail price response to price increases at a lower market level is similar to the retail price response to price decreases at the same market level (Tweeten and Quance, 1969; Houck, 1977; Ward, 1982; Kinnucan and Forker, 1987; Reed and Clark, 1998; Cramon-Taubadel, 1998; Vande Kamp and Kaiser, 1999).  If the retail price response is the same, the market is symmetric.  If the response differs, the market is asymmetric. The majority of the empirical studies above have focused on farm-retail price transmission as influenced by the differential impacts associated with changes in retail demand versus farm supply upon price responsiveness (Gauthier and Zapata, 2000).  The price transmission effects between the farm and processor levels are as important as those between the farm and retail levels for field crops, especially rice.  Milling transforms rough rice into the more desired milled rice.  Farm-milled price spreads are sensitive to changes in rough rice prices.  Millers are able to increase their margins as farm prices increase and similarly decrease margins as farm prices fall.  It is the milled price that transmits changes in farm prices to final consumers.  But to our knowledge, little research has been published on analyses of asymmetry in farm-mill price transmission effects.  Rice is an important revenue generating crop in the U.S.  For instance, rice is the largest field crop Arkansas produces, accounting for more than 490 million dollars in farm revenue in 2000 (ERS/USDA, 2001). Shifting more than 44% of its production to the world market in 2002, the U.S. was the third largest exporting country after Thailand and Vietnam.  Since 1989, U.S. rice consumption has increased because of consumer desires for healthier foods, changes in the country’s demography, and other factors favorable to rice.  
Given the increasing importance of rice to regional producers and consumers, information about the transmission effects between the farm and milling sectors would be valuable in the design of government policies impacting prices.  The objective of this study is to determine whether price transmission asymmetries exist between the farm and mill levels in the major rice producing states.  More specifically, this paper reports on a test of the null hypothesis that decreases in milled prices resulting from decreases in farm prices, which lead to increases in milling margins, are as fast as increases in milled prices resulting from increases in farm prices in the major rice producing states, Arkansas, California, Louisiana, and Texas.  The paper is organized as follows. The econometric approach is first discussed. Second, the paper describes the data. Third, the empirical results of the analysis are presented.  Lastly, the paper summarizes and concludes.  Houck’s segmentation procedure was the conventional engine of analysis for assessing asymmetry for over 30 years despite its multicollinearity problem. The momentum-threshold autoregression (M-TAR) model developed by Enders and Granger has been adopted here to avoid the multicollinearity limitations in testing the null hypothesis of symmetric price adjustments at the farm and milling levels.  The M-TAR model is a more general specification of the class of error-correction models (ECM) reported in the cointegration literature (Engle and Granger, 1989).  According to the Engle-Granger (1989) theorem, when two time series (X1t and X2t) follow non-stationary processes, both the short- and long-run relationships between the two series can be adequately modeled under certain circumstances as follows:
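The equation referred to above is not reproduced in this excerpt. The following sketch illustrates, on synthetic data, the two-step logic the abstract describes: an Engle-Granger long-run regression followed by an M-TAR regression of the residual changes with a momentum (zero-threshold) Heaviside indicator. Series names, data and the zero threshold are assumptions for illustration, not the paper's USDA series or exact specification.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
T = 200
farm = np.cumsum(rng.normal(0, 1, T)) + 50        # hypothetical rough (farm-level) price
milled = 1.4 * farm + rng.normal(0, 1.5, T)       # hypothetical milled price, cointegrated with farm

# Step 1: Engle-Granger long-run regression; keep the equilibrium errors
mu = sm.OLS(milled, sm.add_constant(farm)).fit().resid

# Step 2: M-TAR regression  d(mu_t) = I_t*rho1*mu_{t-1} + (1 - I_t)*rho2*mu_{t-1} + e_t,
# where the Heaviside indicator I_t depends on the lagged *change* in mu (momentum)
d_mu = np.diff(mu)
mu_lag = mu[:-1]
ind = np.zeros_like(mu_lag)
ind[1:] = (d_mu[:-1] >= 0).astype(float)          # threshold set to zero for the sketch

X = np.column_stack([ind * mu_lag, (1 - ind) * mu_lag])
mtar = sm.OLS(d_mu, X).fit()
print(mtar.params)                                 # estimates of rho1 and rho2
print(mtar.f_test("x1 = x2"))                      # F-test of symmetric adjustment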

 

Application of Thermodynamics on Product Life Cycle

Dr. Kuang-Jung Tseng, Hsuang Chuang University, Hsin Chu, Taiwan, ROC

 

ABSTRACT

Thermodynamics is a physical science that deals with the interactions of matter and energy.  It contains the law of energy conservation and the law that entropy tends to increase in a closed system. The author has applied these laws to constrain the processes by which raw materials are transformed into consumable goods and the goods are distributed afterwards. The size of the sales force can be determined by introducing the temperature concept from thermodynamics to represent hot or cold consumable goods. With the concept that entropy increases in a closed system, which is similar to the distribution of consumable goods into the system, an ideal innovation of the product life cycle (PLC) can be determined.  In order to modify the phenomenon of the product life cycle, a parameter α, which relates to management efforts as well as development and service efforts, is introduced. As a result, the PLC curve can be manipulated through the value of α. The parameter α is related to the product quality, sales services and the degree of newness of the application of the product. While physical sciences deal with the interactions of matter and energy, economics can be said to deal with the manufacturing and exchange of goods and services.  Marketing management is about how to locate customers and distribute goods efficiently; hence, a proper production plan is required.  Because goods and services incorporate matter and energy, thermodynamics is clearly relevant to economics and marketing management [Ayres and Nair, 1984]. In particular, one can expect the laws of thermodynamics to impose constraints on economic processes.  The first law of thermodynamics states that energy is conserved [Hsieh 1975]. Ayres and Kneese [Ayres and Kneese, 1969] also noted that the fact that waste emissions are a consequence of resource extraction can be inferred directly from the conservation of mass or energy. Transforming matter into consumable goods requires work, and that work adds heat to the system. If not recycled, the products will become pollutants sooner or later.  The second law of thermodynamics can be stated as: the entropy S, defined by the equation dS = dQ / T, is a function of state; the entropy increases in an irreversible process and remains constant in a reversible process, where Q is heat and T is temperature.  The entropy of the environment tends to increase naturally. For human society, entropy is reduced at the cost of increasing the entropy of the environment.  Marketing management emphasizes organizing the resources of the firm to meet customers’ needs so as to produce adequate products and additional services and so forth. The process of distributing goods costs money. To provide service after a sale has been completed also requires money. In order to achieve a profitable sale of a certain product, a firm needs to have good marketing plans covering manufacturing, distribution (marketing) and services. From the 1950s to the 1990s the product life cycle theorem was widely discussed and used for forecasting and production planning [Parsons, 1975], [Forrester, 1959], [Levitt, 1965], [Smallwood, 1973], and [Paley, 1994]. The PLC generally is defined to have five stages: the introduction stage, the growth stage, the maturity stage, the saturation stage and the decline stage. Paley [Paley, 1994] summarized some useful simple guidelines for managers to identify the stage of the PLC where a product may currently be located.  
But others have raised doubts about the usefulness of the PLC theorem [Day, 1981], [Grantham, 1997], [Wood, 1990], [Dhalla and Yuspeh, 1976], [Hiam, 1990], [Polli and Cook, 1969], [Mercer, 1993]. Day [Day, 1981] raised five basic issues that must be confronted in any meaningful application of the PLC theorem: 1. How should the product-market be defined for the purpose of life cycle analysis?  2. What are the factors that determine the progress of the product through the stages of the life cycle? 3. Can the present life cycle position of the product be unambiguously established? 4. What is the potential for forecasting the key parameters, including the magnitude of sales, the duration of the stages, and the shape of the curve?  5. What roles should the product life cycle concept play in the formulation of competitive strategy?  It is the purpose of this paper to apply the physical laws of thermodynamics to answer these questions, because producing and distributing goods requires resources, as does customer service. The resources may be in the form of matter, energy or cash. In particular, one can expect the laws of thermodynamics to impose constraints on marketing processes as they do on physical processes.  It is clear that the law of conservation of matter and energy, the first law of thermodynamics, implies that matter or energy is consumed, along with energy losses, when goods are produced.  The entropy law of thermodynamics can be used to determine how goods are diffused or distributed into the market.

 

Regionalization and Specialization: A Theoretical Contribution

Dr. Charbel M. Macdissi, CEDE, University of Antilles-Guyane, Guadeloupe

 

ABSTRACT

The study of regionalization and specialization deals with the behavior of a country inside a supra-national bloc and the correlations that may exist between membership in these institutions and the development of exchanges between their member countries. The institutional frame can take the form of common market, cooperation or integration agreements, but also of commercial arrangements among states. Thus, can we consider institutional factors and proximity as determinants of commercial exchanges? Thanks to the institutions and the common agreements between the countries of an area, intra-regional cooperation can and must expand to inter-regional relations as well. However, this approach raises many questions: can we talk about a regional comparative advantage? Must one be affiliated with one or several supra-national institutions? Isn’t there sometimes a contradiction between the objectives pursued by these different institutions? How can we proceed to the intensification of intra-regional exchanges? How can we choose the specialization in order to develop the cooperation? What kind of barriers could delay or even stop regional cooperation?  The purpose of this paper is to contribute to answering some of these questions, to identify the new economic challenges of regional cooperation and to study the main barriers to the constitution of intense regional exchanges.  World trade has grown considerably in recent years: 6% in real annual average terms between 1990 and 2001 (WTO, 2002) and 11% in 2000. In fact, with 5,984 billion dollars of commodity exports and 1,460 billion dollars of commercial services exports in 2001, the globalization of exchanges has asserted itself.  However, parallel to this globalization, we observe the development of a regional path, with the setting up of regional blocs, comparable to the success of the European Union.  Thus, in recent years we have witnessed the formation of NAFTA (the North American Free Trade Agreement) (Whalley, 1992), MERCOSUR (the South American common market), the Association of Caribbean States and, recently, the Free Trade Area of the Americas on the American continent.  In 2001, trade among the Latin American and Caribbean developing countries represented 17% of their total exports (60.8% went toward North America and 12.1% toward Western Europe).  Though these intra-regional exchanges are relatively less important than the intra-European (67.5%) or intra-North American ones (39.5%), they deserve particular attention and a profound analysis in order to know their determinants and their weight in the constitution of regional blocs, and vice versa.  The object of this paper will be, then, to present and analyze the specific determinants of the RCA (Regional Comparative Advantage) and their role in the explanation of regional exchange and regional integration.  Jointly or separately, the traditional determinants of supply and demand provide an explanation, which has not been accepted without criticism, of specialization and of international commercial relations.  However, taking into consideration the development of the latter, it remains imperative to pursue further research and to explore a new way likely to lead to explanations of trade between countries and, more specifically, of intra-regional exchanges.  
In this view, we shall approach the development of this analysis by treating the specific role of non-traditional determinants (dimension and proximity) in the Regional Comparative Advantage and in intra- and inter-regional exchanges.  The dimensional approach shall be treated on a threefold level: the institutional dimension, the political dimension, and the demographic and geographic dimension.  The study of the institutional dimension deals with the behaviour of a country inside the supra-national bloc and the correlations that may exist between membership in these institutions and the development of exchanges between their member countries. The institutional frame can take the form of common market, cooperation or integration agreements, but also of commercial arrangements among states (Fujita, Krugman and Venables, 1999). Thus, can we consider institutional factors as determinants of commercial exchanges?  The regional institutional frame, like the regional common market or the regional integration association, offers countries the possibility of reorganizing their factor endowments and their individual advantages for a regional specialization and intra-regional exchanges.  Could we have integration and an RDI (Regional Division of Investments)?  That means a specialization based on the comparative advantage of each country, which is an integral part of the regional comparative advantage. If the countries of an area choose to specialize in differentiated products, economic integration enhances intra-branch trade (Grubel and Lloyd, 1975; Davies, 1977) between these countries.  Thanks to the institutions and the common agreements between the countries of an area, intra-regional cooperation can and must expand to inter-regional relations as well.  However, this approach raises many questions: must one be affiliated with one or several supra-national institutions?  Isn’t there sometimes a contradiction among the objectives pursued by these different institutions? 
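Two simple indicators often used in this literature can illustrate the notions of intra-regional exchange and comparative advantage discussed above. The figures are hypothetical, and the Balassa-type index shown is one common operationalization of revealed comparative advantage, not necessarily the Regional Comparative Advantage measure developed in the paper.

# Hypothetical trade figures ($ billions)
country_exports_to_region = 12.0
country_total_exports = 70.0
country_sector_exports = 8.0
world_sector_exports = 300.0
world_total_exports = 6000.0

intra_regional_share = country_exports_to_region / country_total_exports
rca = (country_sector_exports / country_total_exports) / (
    world_sector_exports / world_total_exports)   # Balassa-type index; > 1 suggests advantage

print(f"intra-regional export share: {intra_regional_share:.1%}")
print(f"revealed comparative advantage: {rca:.2f}")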

 

Critique and Insight into Korean Chaebol

Dr. Jonathan Lee, University of Windsor, Windsor, Ontario, Canada

 

ABSTRACT

This paper closely examines the management practices of one of Korea’s largest chaebols. The critique of this chaebol is based on interviews with a number of managers still working for the company. It is clear from the discussion that there are many challenges that face the chaebols; however, these challenges are not necessarily unique to Korean chaebols.  South Korea (Korea) is the 11th largest economy in the world in terms of gross national product and the 13th largest trading country as of 2002 (1). It has been and remains one of the fastest growing economies in the world. Many of the Korean chaebols or conglomerates, such as Samsung, Hyundai, LG and others, are well known throughout the world. Although many consumers do not realize that they are Korean companies, some of these companies are among the world’s largest producers and exporters of memory chips, and they rank among the largest and most efficient steel manufacturers and shipbuilders in the world. These conglomerates also produce a wide range of products such as home electronics (TVs, DVD players, computers, etc.) as well as mobile phones, machinery, cars, and so forth. In recent decades, chaebols have been extremely successful in carving out market shares in Europe, Asia, and North America. In effect, chaebols have been the engines that have driven the Korean economy to where it is today. Very few businesses in Korea can escape from the influence and grasp of these giant entities.  While the past success of these organizations is undeniable, the experience of recent years has brought renewed attention to the workings of chaebols, from both within and outside of Korea.  The Asian financial crisis of the late 1990s tarnished some of the luster of Korea’s economic “miracle.”  Although the crisis is past, the Korean economy may well be at a crossroads, in which past practices, even those by which Korea excelled in the past, should be critically evaluated.  This paper closely examines the management practices of one of Korea’s largest chaebols.  This conglomerate has an extensive global presence, and its name and products are easily recognizable throughout the world. The critique of this chaebol is based on interviews with a number of managers still working for the company.  Initially reluctant to divulge any information which might discredit their organization, these interviewees agreed only after their trust was gained, strict anonymity was granted, and the chaebol was to remain undisclosed.  The critical analysis of this company began with an open question posed by this researcher to each manager, who was then open to follow-up questions to elicit more detail.   Many of the problems associated with this company are not necessarily unique; in fact they reflect many of the shortcomings of large conglomerates around the world. The question posed was, ‘Are there any problems or areas of improvement that are needed in your company?’ This fundamental question became the basis for a discussion of this chaebol; however, each manager was also certain that many of these problems were prevalent in other Korean chaebols, and they suggested repeatedly that these challenges were not unique to their particular company. Their hope was that by discussing these issues, they would begin to acknowledge the problems and begin addressing them. As employees and managers of this company, they also realized that the responsibility for these problems weighed heavily on them.  
The process of discussing their company in a critical fashion was very difficult for them, given that they have been employed by the multinational for many years.  Clearly they all loved the company and were extremely loyal to it. At times they were reluctant to talk openly about the problems for fear of being exposed or of appearing disloyal.  It is also contrary to Korean culture, and in particular to Korean business practice, to openly talk about these problems, since loyalty to the company is one of the most important values expected of an employee. Although many things are slowly changing in Korea, most chaebols still offer some form of lifetime employment or long-term commitment to their employees. Once one joins these companies, one is joining their family.  Companies treat their employees very paternalistically. During the discussion the managers also pointed out many strong points of the company. Certainly, the success of many Korean chaebols in world markets is indicative of their many strong points. However, the focus of this research was to examine ways to improve these powerful Korean conglomerates and to begin to address their problems one by one in order for the companies to stay competitive and survive in an increasingly difficult world environment. This paper examines nine main problems or challenges that face Korean chaebols today. These challenges are outlined in figure 1-1 and they become the basis for discussion and analysis. One of the most frequent complaints expressed by managers as well as many employees of Korean chaebols is the long working hours. It is not uncommon for employees to work well past 6 p.m., and for managers to work until 9 p.m. on a daily basis. Furthermore, there are socially obligatory drinking hours after work; thus, many workers go to neighborhood bars after the work hours to continue to talk and discuss with their colleagues and ‘decompress’ in preparation for home. This is repeated day after day and can be very grinding on the workers.  The long duration of the workday has been the focus of chronic complaints of many managers working for the Korean chaebols. These long hours can be very stressful to an individual, and the lack of time spent with family becomes a source of complaint for many spouses. Recently, shortening the work hours has been seriously considered and the government was ready to impose laws to limit them; however, there has been stiff resistance from employers who fear that fewer working hours mean less output and a loss in profit. Thus, for many employees, the hope of shorter workdays is still a chimera, and long working hours remain the norm. Some managers expressed that these long, grinding working hours and the difficult nature of the job that they need to perform on a daily basis were more demanding than the mandatory military service required of all Korean males.  The difficulty of dealing with these long working hours is more serious when one considers the reason behind it. According to the managers, the general productivity of the workers was very low on account of the length of the workday.

 

Golf, Tourism and Amenity-Based Development in Florida

Dr. Alan W. Hodges, University of Florida, Gainesville, FL

Dr. John J. Haydu, University of Florida, Gainesville, FL

 

ABSTRACT

Settlement patterns in the United States are increasingly based on environmental, cultural and recreational amenities and the perceived quality of life, rather than economic opportunities. This is especially true for the retired population, who are not dependent upon earned income. The state of Florida has experienced very rapid growth over the past 50 years due to tourism and amenity-based development. Many residential developments now feature golf courses and other recreational amenities. Golf is a highly popular recreational sport in America, with participation by over 20 percent of the adult population. In Florida, there are currently over 1300 golf courses, more than any other state.  Golf is an important activity associated with the large tourism industry in Florida. Economic characteristics and regional impacts of golf courses in Florida in the year 2000 were evaluated based upon survey data together with other published information and a regional economic model constructed using Implan. Survey results indicate that residential developments were part of 54 percent of Florida golf courses, with some 756,000 residential units having a total value of $158 billion. Golf industry employment was 73,000 persons. The book value of assets owned by golf courses was $10.8Bn, including land (58%), buildings and installations (26%), vehicles and equipment (10%) and golf course irrigation systems (6%). Land area owned by golf courses was 205,000 acres, with 147,000 acres in maintained turf. Travel expenses by golf playing visitors in Florida were estimated at $22.9Bn, of which $5.4Bn were attributed directly to the golf experience. Based on an Implan model for Florida, these expenditures had a total impact on the Florida economy of $9.2Bn in personal and business net income (value added) and 226,000 jobs. Golf courses had a positive effect on nearby property values in 18 selected counties, with total values for residential properties near to (within one mile of) golf courses averaging nearly $20,000 higher than other properties not near a golf course. Total county property tax revenues attributable to these higher property values were estimated at $214 million, based on county-specific taxation rates. Population growth, migration and settlement patterns in the United States are increasingly driven by considerations of environmental, cultural and recreational amenities, and the perceived quality of life. Numerous studies have examined the role of amenities in regional growth and economic development, as summarized in bibliographies by Dissart and Deller (2000) and Marcouiller et al (2002). McGranahan (1999) showed that natural amenities, such as warm and sunny winters, moderate summer temperatures and humidity, topographic variation, and water area, explained a significant share of population growth in nonmetro U.S. counties during the period 1970-96. Similarly, Mueser and Graves (1995) examined population redistribution in the United States from 1950 to 1980, and found that migration trends over this period were tied to both changes in income and household preferences for amenities, with the latter perhaps having a greater role. Deller et al (2001) used data from 2,243 U.S. counties to evaluate a range of factors related to population and economic growth in rural areas, and found that all five of the environmental amenity attribute measures played a significant role in regional economic growth, including climate, land, water, winter recreation and developed recreational infrastructure. 
Nord and Cromartie (1997) concluded: “In studies that estimate the effects of economics and location factors on migration while controlling for effects of other factors, natural amenities emerges as the strongest single factor associated with net immigration on to rural counties”.  Nowhere has the trend toward amenity-based development been more important than in the state of Florida, where the population has grown from less than three million people to more than sixteen million in the last 50 years.  This growth was due not to the traditional economic bases of agriculture, resource extraction or manufacturing, but rather to natural and human-created amenities, such as a warm climate, tropical beaches, abundant water resources, and large entertainment attractions. As retired people and tourists came to experience these amenities, income flowed into the state and demand for services expanded rapidly, creating new jobs and attracting younger people into the state as well as elderly migrants. Studies have demonstrated that Florida is widely perceived as an area of high quality natural amenities and desirable places to live. Blomquist et al. (1988) used a hedonic methodology that included 16 climate, environmental and urban amenities, and implicit prices to calculate quality of life indices expressed in dollar values for 253 urban areas. Florida had six urban areas in the top 50, was tied for first place (with California and Colorado), and had no urban areas in the bottom 50.  The quality of life index puts the top Florida urban areas $5,500 to $7,000 per household above the lowest ranked urban area. Furthermore, in Nord and Cromartie’s (1997) study, all the counties of Florida were in the highest quartile on a summary index of natural amenities.  One of the major reasons for these trends is the demographic shift toward a larger retiree population and the importance of non-employment income. This large body of potential migrants is motivated to move for reasons other than work, and because their incomes (savings, pensions, dividends, etc.) are not tied to jobs or particular places, these people can and do select places to live based on features that are attractive to them.
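As a hedged illustration of the hedonic logic cited above (Blomquist et al., 1988) and of the paper's comparison of property values near golf courses, the sketch below regresses synthetic sale prices on size and a within-one-mile-of-a-golf-course dummy. The data and coefficients are invented placeholders, not the county assessment records or the averaging method actually used in the study.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 1000
sqft = rng.normal(1800, 400, n)                   # hypothetical living area
near_golf = rng.binomial(1, 0.25, n)              # 1 = within one mile of a golf course
price = 40_000 + 90 * sqft + 20_000 * near_golf + rng.normal(0, 25_000, n)

X = sm.add_constant(np.column_stack([sqft, near_golf]))
fit = sm.OLS(price, X).fit()
print("estimated near-golf premium ($):", round(fit.params[2]))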

 

Explaining Embraer’s Hi-Tech Success: Porter’s Diamond, New Trade Theory, or the Market at Work?

Dr. Juan España, National University, La Jolla, CA

 

ABSTRACT

This paper analyzes alternative theories of competitive advantage and their ability to explain the commercial success of the Brazilian aircraft manufacturer EMBRAER in world markets. A survey of EMBRAER’s history reveals a company created by the Brazilian government out of national security considerations but able to transform itself over time into a vibrant commercial enterprise. Key to this successful transformation were the 1994 privatization and the formation of a number of strategic and operational alliances with international partners. EMBRAER’s case seems to transcend the neat demarcation lines drawn by adherents of existing competitive advantage theories, raising important questions about the conventional wisdom on the issue of how to create and sustain competitive national industries or firms.  Empresa Brasileira de Aeronautica S.A. (EMBRAER) is a manufacturer of commercial and defense aircraft founded in 1969 by a Brazilian military junta intent on providing Brazil with its own aircraft-manufacturing capability. It was conceived as a mixed enterprise, with the Brazilian government holding 51% of the voting shares and the rest dispersed among private investors. Production operations began in 1970, and the first Xavante, a trainer and attack/reconnaissance airplane, was produced in 1971 under license from Aermacchi, the Italian aircraft manufacturer. The new enterprise broke even in 1971 and remained profitable until 1981. EMBRAER is headquartered in Sao Jose dos Campos, near Sao Paulo. From its humble beginnings, it has grown into the world’s fourth largest producer of commercial aircraft, with total revenues and net income of $2.53 billion and $222 million, respectively, in 2002. These results represent a decline from the corresponding 2001 figures of $2.94 billion and $322 million, a decrease attributable to a recessionary global economy and its impact on the world aviation industry (Multex Investor, 2003). In 2002, overseas markets accounted for approximately 97% of total sales, with domestic sales accounting for roughly 3%. By 1999, EMBRAER had become Brazil’s largest exporter, a position it lost in 2002 when it became the 2nd largest seller of Brazilian goods in world markets. In 2003, EMBRAER had 12,227 employees worldwide with subsidiaries in the United States, Australia, France, China, and Singapore.  EMBRAER’s product range includes commercial, military and corporate aircraft, with commercial planes accounting for roughly 80% of the company’s total sales revenue.  The EMBRAER 170, a 70-seat airplane priced at $21 million, competes with the Airbus A 318 and the Boeing 717. Its maiden flight took place in February 2002. This is the first of a larger 70-to-108-seat family of airplanes launched in 1999 at the Paris Le Bourget Air Show. Delivery of the EMBRAER 175, 195 and 190 models is slated to begin in July 2004, December 2004 and December 2005, respectively. This larger aircraft family brings EMBRAER into direct competition with Boeing and Airbus, the two largest world manufacturers of airplanes. However, according to Mauricio Botelho, EMBRAER’s President and CEO, his company’s new larger aircraft models have a competitive advantage for regional routes because, unlike Boeing’s 717 and the Airbus A 380, which are adaptations of even larger models, EMBRAER’s new family of jets was designed with shorter regional routes in mind, is lighter and has better fuel consumption (Icaro Brasil, 2000).  
The Gulf War of 1991 resulted in a worldwide recession that affected the entire aviation industry, including EMBRAER. Financial problems ensued; the company showed a loss of $337 million in 1994 and was approaching bankruptcy. The Brazilian government began a search for a foreign aerospace company willing to take a stake in and help save EMBRAER; competing manufacturers of regional aircraft were barred from the process. Finally, in 1994, the Brazilian holding and investment bank Bozano Simonsen SA and two large Brazilian mutual fund companies, PREVI and SISTEL, acquired 60% of EMBRAER’s voting shares. A period of major layoffs ensued in which the company shed about one third of its 5,500 employees and took additional measures to become leaner and more market oriented.  Besides the financial considerations, EMBRAER also intended to secure the participation of a key strategic partner that would help it expand its defense-related business and its international operations.  In the late 1990s, the company also needed additional financial resources to fund the development of its new 170-195 models within the next few years. European aviation and defense consortia showed interest, and after a protracted search and negotiation process, talks seemed to culminate in October 1999, when a consortium formed by British Aerospace and the Swedish defense group Saab announced it would take a 20% equity stake in EMBRAER (BBC, 1999).

 

Investigating and Modelling GATS Impacts on the Developing Countries: Evidence from the Egyptian Banking Sector

Dr. Mansour Lotayif, University of Plymouth Business School, Plymouth, Devon, UK

Dr. Ahmed El-Ragal, Arab Academy for Science and Technology, Alexandria, Egypt

 

ABSTRACT

Previous research has focused on the impact of GATS (General Agreement on Trade in Services) on developed countries, whilst few studies have investigated these impacts on developing countries. This paper pursues three main objectives: (1) to identify the impact of GATS on the Egyptian banking sector, using a chi-square goodness-of-fit test to determine whether the perceived impact on the sector is positive or not; (2) to elaborate on these impacts by exploring the dependency relationships between twenty-five GATS impacts and seven demographic variables, using correlation analysis; and (3) to examine causality relationships by modelling the variables that affect the perception of the GATS impacts (i.e. the dependent variables), using multiple regression (MR). Evidence compiled from the Egyptian banking sector reveals that the perception of each GATS impact is affected by five of the predictor variables: bank type (local or foreign), respondent position, respondent educational level, respondent experience, and bank experience.  Goldin et al. (1993); Brandio and Will (1993); OECD (1993); GATT (1993); Nguyen et al. (1991); Frohberg et al. (1990); Deardorff and Stern (1990); Burniaux et al. (1990); and Trela and Whalley (1990) argued that the benefits of trade liberalization resulting from the Uruguay Round would range from $119 billion to $274 billion, coupled with an increase in global trade of 12.4% to 17% by 2005. However, developed countries and large goods traders such as the US, the EU, and Japan will benefit the most (Greenaway and Milner, 1995), while developing and less developed countries may pay the bill. Table (1) summarizes the results of this previous work. Interestingly and surprisingly, services is the one sector whose liberalization holds potential gains for developing countries: it is estimated that the service sector in developing countries is the only sector that will achieve positive value added by 2005 (Thomas et al., 1996). The question raised here is whether the Egyptian banking sector will be among those that benefit from trade liberalization in services.  The literature on this issue is centered on two main points of view: pessimistic and optimistic. In other words, many scholars have discussed and explored the consequences of GATS (General Agreement on Trade in Services) in general; some were pessimistic (e.g. El-Mody, 1995; and Evans and Walsh, 1995) and the rest optimistic (e.g. Fox, 2001; Asch, 2001; Kono and Ludger, 1999; Demirguc-Kunt and Huizinga, 1998; Mattoo, 1998; IMF, 1998; Kono et al., 1997; Claessens and Glaessner, 1997; Goldstein and Turner, 1996; and Greenaway and Milner, 1995). For instance, it has been argued that the openness of financial markets can raise efficiency, develop markets, increase financial market transparency, and attract new capital (Kono and Ludger, 1999; Demirguc-Kunt and Huizinga, 1998; IMF, 1998; Kono et al., 1997; and Claessens and Glaessner, 1997). Accordingly, Egypt has signed all the GATS agreements and has been a full member of the WTO (World Trade Organization) since its creation in 1995 as the legal successor of GATT (General Agreement on Tariffs and Trade). However, GATS may negatively affect Egyptian monetary and credit policies (El-Mody, 1995).  
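The statistical steps named above can be illustrated with a minimal sketch. The snippet below is not the authors' code: the simulated survey data, the column names, and the 50/50 null proportions for the goodness-of-fit test are illustrative assumptions. It simply shows, in Python, a chi-square goodness-of-fit test on positive versus negative impact responses and a multiple regression of perceived aggregate impact on the five predictors the abstract identifies.

# Hedged sketch: chi-square goodness of fit and multiple regression on a simulated survey.
# All data, column names and null proportions below are assumptions for illustration only.
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 300                                              # hypothetical number of respondents
df = pd.DataFrame({
    "impact_sign": rng.choice(["positive", "negative"], n, p=[0.65, 0.35]),
    "aggregate_impact": rng.normal(3.5, 0.8, n),     # e.g. mean score on a 5-point scale
    "bank_type": rng.choice(["local", "foreign"], n),
    "position": rng.choice(["chairman", "member"], n),
    "education": rng.choice(["bachelor", "master", "phd"], n),
    "respondent_experience": rng.integers(1, 35, n),
    "bank_experience": rng.integers(5, 80, n),
})

# 1) Goodness of fit: are "positive" responses more frequent than a 50/50 split implies?
observed = df["impact_sign"].value_counts()
expected = [observed.sum() / 2] * 2                  # null hypothesis: equal proportions
chi2, p_value = stats.chisquare(f_obs=observed.values, f_exp=expected)
print(f"chi-square = {chi2:.2f}, p = {p_value:.4f}")

# 2) Multiple regression of aggregate impact on the five predictors named in the abstract.
model = smf.ols("aggregate_impact ~ C(bank_type) + C(position) + C(education)"
                " + respondent_experience + bank_experience", data=df).fit()
print(model.summary())

In such a design the chi-square test speaks to the first objective (is the perceived impact predominantly positive?), while the regression coefficients speak to the third (which predictor variables drive that perception).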
Both points of view invite an exploration of these contradictory perceptions, and therefore justify beginning the current study by investigating GATS impacts on the Egyptian banking sector. Bearing in mind that this area of research is still in its infancy, an endeavor has also been made to model the variables that affect the perception of these GATS impacts.  This paper aims to identify the impacts of GATS on the Egyptian banking sector and to determine the factors that affect their perception by exploring both dependency and causality relationships in this context. To achieve these aims, the following hypotheses have been proposed and tested:  H1: “GATS agreements have greater positive than negative impacts on the Egyptian banking sector.” H2: “There is a significant relationship between respondents’ positions and their perception of the aggregate GATS impacts.” H3: “There is a significant relationship between respondents’ ages and their perception of the aggregate GATS impacts.” H4: “There is a significant relationship between respondents’ educational levels and their perception of the aggregate GATS impacts.” H5: “There is a significant relationship between respondents’ experiences and their perception of the aggregate GATS impacts.”  The adopted methodology covers the research population, the sample, the response base and response rate, and the instrument used and its reliability. Firstly, the research population is the Egyptian banking sector, which comprises 82 banks governed by 800 board of directors members and served by 83,179 employees and 20,658 workers, as shown in Table (2). Secondly, the sample selected to represent this population is a response base of 800; according to Ali (1973) and Hill et al. (1962), this sample size is sufficient. The sample was divided into two main categories, local banks (regardless of their ownership) and foreign branches, as shown in Table (3) below. Thirdly, the respondents in this study were the members of the boards of directors.
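As a companion to the hypotheses listed above, the sketch below shows one hedged way to test the dependency hypotheses (H2-H5) with rank correlation. The data are simulated and the ordinal codings are assumptions made only for illustration; they are not the study's instrument or results.

# Hedged sketch: Spearman rank correlation between demographic variables and perceived impact.
# Simulated data and ordinal codings are illustrative assumptions only.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(2)
n = 300
df = pd.DataFrame({
    "position": rng.integers(1, 5, n),          # ordinal codes, e.g. 1 = member ... 4 = chairman
    "age": rng.integers(30, 65, n),
    "education": rng.integers(1, 4, n),         # e.g. 1 = bachelor, 2 = master, 3 = phd
    "experience": rng.integers(1, 35, n),
})
df["aggregate_impact"] = 3 + 0.02 * df["experience"] + rng.normal(0, 0.5, n)

for var in ["position", "age", "education", "experience"]:
    rho, p = stats.spearmanr(df[var], df["aggregate_impact"])
    verdict = "significant" if p < 0.05 else "not significant"
    print(f"{var:10s} rho = {rho:+.3f}  p = {p:.4f}  ({verdict} at the 5% level)")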

 

The Behaviors & the Statistical Properties of Emerging Markets’ Indices & Their Impact on Estimated Stock Beta:  The Case of ASE

Dr. Mahmoud A. Al-khalialeh, Kuwait University, Safat, Kuwait

 

ABSTRACT

This study examines the behavior and statistical properties of two alternative market indices introduced by the Amman Stock Exchange (VWI & EWI) and their impact on estimated stock beta. The study’s predictions are examined using 2,206 daily market index observations over a nine-year period (1992-2000). The findings indicate that the mean of EWR is notably and significantly higher than the mean of VWR for most of the years and time intervals examined. Additionally, the findings suggest that the two market returns tend to converge during bullish market conditions and diverge widely during bearish conditions. The variances of the two market returns differ significantly in the last four years. The implications for estimated beta are examined by estimating betas for a sample of 58 companies listed on the ASE, using the two market indices and 246 daily index observations during 1998. The results indicate that the two market returns produce dissimilar betas: the market return with the larger variance (VWR) produces significantly lower betas than the market return with the lower variance (EWR).  The market index, which has been viewed as an indicator of the overall performance of the security market, is of considerable interest to professionals and researchers in accounting, finance and economics. A significant body of market-based research in accounting and finance over the last three decades has used the market index to estimate security betas and returns, beginning with Ball & Brown (1968). In applying the market model to estimate security beta, researchers can choose a proxy for the market portfolio from among alternative market indices. An issue that has been examined in well-developed markets is whether the choice of market index affects estimated beta.  Prior studies have examined the impact of using alternative market indices on estimated stock beta and on the stationarity of the market model (e.g., Roden, 1981; Saniga et al., 1981; Lee, 1985; Elgers & Murray, 1982). For example, Roden’s (1981) study indicates that estimated betas vary according to the market index used. Elgers and Murray’s (1982) study indicates that beta estimates obtained using value-weighted indices (CRSP & S&P) exceed those obtained using the CRSP equally weighted index, and that the stability of beta estimates over time is quite sensitive to the market index employed. However, Lee’s (1985) study reports little difference among the alternative market indices employed in terms of their impact on estimated beta and on the stationarity of the market model. The results of most of these studies indicate, in general, that different market indices can produce different betas. These results suggest that there are differences in the statistical properties of the alternative market indices that are capable of inducing differences in estimated betas. However, none of the aforementioned studies examined the differences in the statistical properties of alternative market indices that may induce differences in estimated beta. A later study by Kim (1989) compares the two CRSP market indices (value weighted & equally weighted) and reports differences in their means and variances; however, that study did not examine whether the reported differences are significant.  
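The pattern described above, where the higher-variance value-weighted return yields lower estimated betas, follows from the market model, in which beta is the slope of an OLS regression of a stock's returns on the market return. The sketch below is a hypothetical illustration under simulated data, not the study's estimation code; the assumed return variances merely mimic the pattern the abstract reports.

# Hedged sketch: market-model beta estimated against two simulated market return series.
# The simulated variances are assumptions chosen to mimic the VWR/EWR pattern, not real data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 246                                          # daily observations, as in the study's 1998 sample
ewr = rng.normal(0.0005, 0.010, n)               # equally weighted market return (assumed variance)
vwr = ewr + rng.normal(0.0, 0.012, n)            # value weighted return with larger variance (assumed)
stock = 0.8 * ewr + rng.normal(0.0, 0.015, n)    # one hypothetical stock's daily returns

def market_model_beta(stock_ret, market_ret):
    """OLS market model: regress the stock return on a constant and the market return."""
    x = sm.add_constant(market_ret)
    return sm.OLS(stock_ret, x).fit().params[1]  # slope coefficient = beta

print("beta vs EWR:", round(market_model_beta(stock, ewr), 3))
print("beta vs VWR:", round(market_model_beta(stock, vwr), 3))   # higher-variance index -> lower beta

Because the slope equals cov(stock, market) / var(market), inflating the variance of the market series mechanically shrinks the estimated beta, which is the intuition behind the VWR-versus-EWR result.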
All the aforementioned studies were conducted in well-developed markets, which usually differ in character from emerging markets. Emerging markets are characterized by thin trading, low liquidity, high volatility and a high rate of concentration, which may affect the properties of their market indices and the impact of using alternative indices on estimated betas (1). A prior study indicates that the Amman Stock Exchange (ASE) is one of the most highly concentrated emerging markets: the top ten companies listed in the market accounted for 66.2% of the total capitalization of all listed companies (2). This high rate of concentration may distort the value-weighted index and possibly lead to significant differences in the statistical properties of the two market indices (3). Therefore, reexamining the statistical properties of market indices and their possible impacts on estimated betas in emerging markets may provide additional insights into these issues.  This study extends prior research by addressing two related issues collectively. Firstly, it examines the differences in the statistical properties of the two ASE market indices (value weighted & equally weighted) and their related returns, using daily price indices over a nine-year period (1992-2000). Secondly, it examines empirically the impact of these differences on the distributions of betas estimated under the two alternative market indices. These issues should be of interest to researchers conducting capital market research in small emerging markets who employ a market index in their research design. 
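For the first issue, comparing the statistical properties of the two index returns, a hedged sketch of the kind of tests involved is shown below. The series are simulated placeholders, and the paired t-test and Levene's test are common choices for comparing means and variances of returns observed on the same trading days; they are not necessarily the exact tests the study employs.

# Hedged sketch: comparing means and variances of two simulated daily return series.
# The return series are placeholders; the study uses 2,206 daily observations over 1992-2000.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 2206                                        # daily observations, as in the study
ewr = rng.normal(0.0008, 0.008, n)              # equally weighted daily returns (assumed)
vwr = rng.normal(0.0004, 0.011, n)              # value weighted daily returns (assumed)

t_stat, p_mean = stats.ttest_rel(ewr, vwr)      # paired t-test on the means (same trading days)
w_stat, p_var = stats.levene(ewr, vwr)          # Levene's test for equality of variances

print(f"mean EWR = {ewr.mean():.5f}  mean VWR = {vwr.mean():.5f}  p(equal means) = {p_mean:.4f}")
print(f"var  EWR = {ewr.var(ddof=1):.6f}  var  VWR = {vwr.var(ddof=1):.6f}  p(equal variances) = {p_var:.4f}")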
