The American Academy of Business Journal
Vol. 2 * No. 1 * March 2002
The Library of Congress, Washington, DC * ISSN: 1540-7780
Online Computer Library Center * OCLC: 805078765
National Library of Australia * NLA: 42709473
Peer-Reviewed Scholarly Journal
All submissions are subject to a double-blind peer review process.
The primary goal of the journal is to provide opportunities for academicians and professionals from the various business-related fields, in a global realm, to publish their work in one source. The Journal brings together academicians and professionals from all business-related fields to interact with members inside and outside their own disciplines, and provides opportunities for researchers to publish their papers as well as to view the work of others. The Journal is a refereed academic journal that publishes scientific research findings in its field under ISSN 1540-7780, issued by the Library of Congress, Washington, DC. The journal meets the quality and integrity requirements of applicable accreditation agencies (AACSB, regional) and journal evaluation organizations, to ensure that our publications provide our authors with venues that are recognized by their institutions for academic advancement and academically qualified status. No manuscript will be accepted without the required format. All manuscripts should be professionally proofread/edited before submission; after the manuscript is edited, the editing certificate must be sent to us. Professional proofreading/editing services such as www.editavenue.com may be used. The manuscript should also be checked with plagiarism detection software (for example, iThenticate, Turnitin, Academic Paradigms LLC Check for Plagiarism, or Grammarly Plagiarism Checker), and the certificate with the complete report should be sent with the submission.
The Journal is published twice a year, in March and September. The e-mail: email@example.com; Journal: AABJ. Requests for subscriptions, back issues, and changes of address, as well as advertising inquiries, can be made via the e-mail address above. Manuscripts and other materials of an editorial nature should be directed to the Journal's e-mail address above.
Copyright 2000-2020 AABJ. All Rights Reserved
ISO9000: 2000 Quality Management Systems Standards: TQM Focus in the New Revision
Dr. C. P. Kartha, University of Michigan-Flint, MI
The International Standards Organization in 1987 developed a set of quality standards, the ISO9000 series, as a model for quality assurance in design, development, production, installation and service. The purpose behind the deployment of these standards was to simplify the global exchange of goods and services by developing a common set of quality standards. They provide a universal framework for quality assurance and quality management. The standards were revised in 1994; the revised version is referred to as ISO9000: 1994, and a vast majority of organizations currently use it. The most recent revision of these standards, ISO9000: 2000, was published in December 2000. The revision adopts a systems approach to quality management and includes TQM principles and procedures. This paper examines the major changes and improvements in the latest revision in relation to ISO9000: 1994 and some of the issues involved in implementing the new standards. Global competition and customers' demands for better quality in recent years have resulted in an emerging need for countries to develop guidelines and standards for identifying and addressing quality issues. The International Standards Organization in 1987 developed a set of quality standards known as ISO9000, subsequently updated and revised in 1994, as a model for quality assurance and quality management for organizations involved in design, development, production, installation and service. The purpose behind the deployment of ISO9000 was to simplify the international exchange of goods and services by requiring a common set of quality standards. The European Community nations adopted ISO9000 as the model for international quality standards and made registration to these standards mandatory for doing business with other nations. In the meantime, the Big Three automobile manufacturers in the U.S.
developed QS9000, an extended version of the ISO9000 standards, and required that all their suppliers obtain registration to these standards in order to qualify for new contracts as well as to renew existing contracts. These developments generated immense interest among companies all over the world in actively implementing the various requirements in the standards to obtain certification. For most organizations, working towards certification was no longer an option but a necessity for survival in a competitive global market. The most recent revision of these standards, ISO9000: 2000, was published in December 2000. The revision was in response to widespread criticism of several key aspects of the old standards. The new standards have a completely new structure based on the principles of Total Quality Management.
Anthropomorphic and Ecological Views of E-Business
Dr. Thang N. Nguyen, Candle Corporation and California State University Long Beach, CA
From an anthropomorphic view, we consider e-business over the Internet as a giant living human body, hence a living species. From an ecological view, we extend the concept of the business ecosystem by Moore (1993, 1996) to include software in particular as living species within business ecosystems. As depicted in Figure 1, we establish a parallelism between “e-business” driven by software (as living species) and non-software factors (e.g. funding – not shown in Figure 1) and “the natural ecology” conditioned by multiple living species and abiotic factors (such as temperature – not shown in Figure 1). Together, the two views give rise to the concept of e-business as an automation continuum that ranges from bits (microscopic) to digital species to business ecosystems (macroscopic). We argue that this parallelism helps define a framework for the investigation of Business-IT integration, structurally (anatomically), functionally (physiologically) and behaviorally. The general parallelism in Figure 1 suggests a biologically/ecologically-inspired framework to exploit the bio-ecological organization, interaction and behavior of digital species and digital organisms (denoting software and software products, respectively) and business ecosystems. In this paper, we restrict our attention to such a framework for the investigation of business-IT integration. At the microscopic level of this parallelism, bits may be considered as particles, primitive data types as atoms and complex data types as molecules. Object classes, in the sense of object-orientation, can then be considered the biological equivalent of cells. Constructor methods in an object class are considered equivalent to DNA/RNA, and basic class methods to organelles in cells. As proteins are composed of about 20 different amino acids, we suggest that amino acids are the equivalents of programming constructs such as the if-statement, for-statement, while-statement, do-while-statement or other constructs built upon them.
We postulate that self-contained algorithms (or general class methods of any class) play a role similar to that of proteins or carbohydrates. Software components (such as COM/DCOM) then may be biologically equivalent to tissues, applications to organs, application systems to organ systems, software products to living organisms, and software to living species. At the macroscopic level, software product families are considered as populations, e-business as a community, and e-business ecosystems (in the sense of Moore, 1993) as natural ecosystems.
A Look Over the Concepts of Work and Leisure Throughout Important Historical Periods
Dr. Esin Can Mutlu, Yildiz Technical University, Istanbul, Turkey
Ozen Asik, Yildiz Technical University, Istanbul, Turkey
The present study attempts an in-depth investigation of work and of a related concept, leisure. Work and leisure are defined as two counterparts, and their co-existence is examined throughout historical periods. Leisure in Ancient Greece is an ultimate activity, consisting of cultivation of the soul through the arts. Work is seen as a degrading activity until Protestantism rises and praises it as a way to reach God. Especially after industrialization, it is seen that work gains superiority over leisure. That is, leisure is seen as complementary to work, and, just as work, it is influenced by aspects of industrialization as well. Current aspects of work and leisure carry the same meanings, though technology leads to more leisure in terms of available time. In this paper, the concepts of work and leisure are defined and reviewed across important periods in history. Starting from hunter-gatherer societies, the concepts are elaborated for agrarian and industrial societies, followed by a conclusion regarding current trends in work and leisure. Work as a general term is defined as an “exertion directed to produce or to accomplish something; labor, toil; productive or operative activity; activity undertaken in return for payment; employment, job.” Some criteria pertaining to the definition of work are cited in an article by Sylvia Shimmin (Hopson & Hayes, 1968). These criteria suggest that: a. work is purposeful activity; b. work is instrumental; c. work yields income; d. work entails expenditure of effort; e. work involves some element of obligation and/or constraint. This last point may also be found in the work of Raymond Firth, who claims that “work is purposive, income-producing activity, entailing the expenditure of energy at some sacrifice of pleasure” (Bryant, 1972).
These suggest that work is usually compulsory, and that its performance does not always give pleasure to the performer, and that another activity might be preferred to work. Leisure on the other hand is defined as an “opportunity or time afforded by freedom from immediate occupation or duty; free or unoccupied time; ease.” It is also considered as “free time after the practical necessities of life have been attended to.” These two definitions stress the time dimension involved in leisure. Another definition states that leisure is the exertion of a preferred activity that provides diversion and pleasure, instead of the everyday routine activities which are usually carried out in a sense of social constraint and obligation. In this definition the activity dimension of leisure is more apparent. Still, it is obvious that when defining leisure, our conception of it is generally bound to work. That is to say, work (in modern society) is considered as the primary activity and hence leisure is defined accordingly, with a secondary emphasis.
Dr. Lieh-Ching Chang, Shih Hsin University, Taiwan R.O.C
Tacit Knowledge Management in Organizations: A Move Towards Strategic Internal Communications Systems
Dr. Probir Roy, University of Missouri Kansas City, Kansas City, MO
Preeta M. Roy, The Wharton School, University of Pennsylvania, Philadelphia, PA
To date, knowledge management systems have focused on explicit knowledge. In this paper we explore the need for the incorporation of tacit knowledge into strategic communication systems. In order for these strategic internal communications systems to become effective, Internet Protocol (IP) Technology is a necessary ingredient. IP Technology, through the use of Virtual Private Networks and Streaming Media, will permit organizations to achieve the three key components of tacit knowledge based strategic internal communications systems, viz. discovery, dissemination and collaboration. Literature in Knowledge Management (KM) concurs that knowledge within an organization falls into two categories – explicit knowledge and tacit knowledge (Markus, 2001). Explicit knowledge is relatively easy to code and very external in nature. Thus, most organizations have concentrated their knowledge management efforts on developing effective links between the management of explicit knowledge and external communications systems. Tacit knowledge, on the other hand, is relatively harder to code and extract, and is very internal in nature. Not only does tacit knowledge need to be discovered, extracted, and captured, it has to be creatively disseminated so that this shared knowledge can be efficiently used to extend the knowledge management base. Perhaps, tacit knowledge is the more important component of knowledge management, in so far as the collaboration that it encourages leads to quantum shifts in knowledge rather than the incremental linear enhancements that are typically associated with explicit knowledge management. Prima facie, it appears that tacit knowledge extraction, dissemination, and collaboration would be difficult to effect. 
However, with the tremendous developments in communications technologies, especially Internet Protocol (IP) technologies, there is a technological push that is leading to rapid advances in strategic internal communications systems that harness tacit knowledge. Strategic internal communications systems are intended for use within an organization, with a very specific target audience, usually employees. These systems contain very specific messages, but allow for wide-ranging and multi-faceted forms of collaboration.
Dividend Omission Announcement Effects, the Firm’s Information Environment, Earnings Volatility and Q
Dr. Devashis Mitra, University of New Brunswick, Canada
Dr. Muhammad Rashid, University of New Brunswick, Canada
For a sample of dividend omitting firms’ stocks, the average returns variance increases significantly on day -3 (where the Wall Street Journal Index announcement date is day 0) relative to a prior estimation period, price spreads show significantly increased levels on days -1 and 0, and average Cumulative Abnormal Returns (CAR) are consistently negative between days -4 and 0 and significantly negative on day -1. These results suggest some anticipatory market uncertainty during the period immediately before the dividend omission announcement. The percentage increase in average returns variance for days -3 and -2 relative to the estimation period average is inversely associated with the firm’s approximate q measure and the dividend change yield from the previous quarter, and positively associated with the percentage of institutional equity holding in the firm as well as with firm-specific earnings volatility, measured by the standard deviation of earnings per share. On average, the returns variance, percentage spread and earnings volatility increase over a 365-day post-announcement period relative to 90-day pre-announcement levels. These results suggest heightened uncertainty in the aftermath of dividend omission. Also, on average, the number of institutions holding the firm's equity declines substantially one year after the announcement. This suggests less monitoring of these firms and enhanced informational asymmetry in the aftermath of the dividend omission announcement. The market’s risk perception subsequent to the dividend omission appears to increase more for firms with high historical earnings volatility and lower approximate q values. This study seeks to add to the empirical literature on the “information content” of dividends by examining the effect of first-time dividend omission announcements for a sample of NYSE-listed firms on their stock price characteristics.
We find that the average returns variance and percentage high-low spread of stock prices increase during an announcement or event period relative to an estimation period. Analysis over a seven-day “event period” shows significantly increased returns variance levels on day -3 and significantly increased average daily price spreads on days -1 and 0, relative to the Wall Street Journal Index announcement date 0. The average Cumulative Abnormal Returns (CAR) are consistently negative between days -4 and 0 and significantly negative on day -1. The news of the dividend omission is, often, announced on day -1 and reported the next day. Some of these announcements may be made during trading hours while others may be transmitted after trading hours.
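The Cumulative Abnormal Return arithmetic underlying these results can be illustrated with a brief sketch. The fragment below (illustrative only; the function names and the choice of Python are ours, and the market-model parameters alpha and beta would in practice be estimated over the prior estimation period) computes daily market-model abnormal returns and their running sum over an event window:

```python
def abnormal_returns(stock_returns, market_returns, alpha, beta):
    """Market-model abnormal return each day: AR_t = R_t - (alpha + beta * Rm_t)."""
    return [r - (alpha + beta * rm) for r, rm in zip(stock_returns, market_returns)]

def cumulative_abnormal_returns(ars):
    """Running sum of abnormal returns across the event window (the CAR path)."""
    path, total = [], 0.0
    for ar in ars:
        total += ar
        path.append(total)
    return path
```

Averaging each firm's CAR across the sample on each event day yields the average CAR figures of the kind reported above for days -4 through 0.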
Dr. K. Shelette Stewart, Nova Southeastern University, Ft. Lauderdale, FL
This study examined the extent to which small businesses with an international focus are employing formal business planning techniques, and the extent to which such techniques contribute to small business success. The study was based on the hypothesis that small business success is associated with formal business planning. Indicators of both formal business planning, the independent variable, and small business success, the dependent variable, were developed. Survey research was conducted to generate and analyze data gathered from 100 owners/operators of small businesses with an international focus located within the Atlanta Metropolitan Statistical Area (MSA). A five-page questionnaire was developed and a survey analysis grid was designed. The researcher found that those businesses practicing formal business planning techniques were more successful than those not employing them. The conclusions drawn from these findings suggest that formal business planning contributes to the success of small businesses with an international focus. Innovation through entrepreneurship and small business development has proven to be the foundation upon which the pillars of American economic growth stand. A myriad of important innovations may be traced back to small business owners and operators. Small businesses constitute a critical component of the United States economy. They provide approximately 80 percent of all new jobs, employ 53 percent of private sector employees, and represent over 99 percent of all employers. Given the prevalence of the Internet, many small businesses are expanding their markets and becoming more international in scope. However, according to the U.S. Small Business Administration, approximately half of all new small businesses fail within the first five years of operation. The agency reports that over half a million small businesses closed and/or filed for bankruptcy during the year 2000.
A common adage suggests that individuals do not plan to fail; they simply fail to plan. This may also be aptly applied to the small business arena as numerous challenges, such as limited resources, financial instability, unplanned expansion, inadequate management, and competition, have been identified as contributing factors to small business failures or closures. Nevertheless, most of these issues may be effectively addressed by one critical endeavor: formal business planning. Most of the literature pertaining to the topic of small business planning is generally more prescriptive than descriptive.
Dr. Abdulla M. Alhemoud, University of Qatar, Qatar
Dr. Tamama H. Abdullah, Ministry of Public Works, Kuwait
Water is rather a scarce commodity in Kuwait. With rapid growth of population coupled with increasing urbanization and agriculture, the demand for water in Kuwait is continually on the increase. The main water source in the country is desalination, with small quantities from underground aquifers. Wastewater effluent could be a valuable source to augment this dwindling water supply, at least for irrigation, and should not continue to be wasted. Reuse of wastewater effluent has the potential both to minimize the disposal of water to the environment and to reduce the demand on fresh water supplies. This paper is an actual case study discussing in detail the features of reuse, the processes used and the standards adopted. Design data, operational results, and physical characteristics for the three wastewater treatment plants (Ardiya, Jahra, and Riqqa) in Kuwait will be discussed. In addition, the paper reports the results of a research study undertaken to determine the willingness of the people of Kuwait to use wastewater effluent for different purposes, and their level of awareness and knowledge. Finally, a cost-benefit analysis was conducted on wastewater effluent and reuse. The study concludes with useful recommendations to both the authorities and the citizens of Kuwait. In the face of population growth and increasing demand for water, rapid growth of agriculture, increasing environmental degradation and impending climate change, arid countries like Kuwait must make more effort to assess water resources for national planning and management. Therefore, it is imperative to look into the feasibility of reusing municipal wastewater in different categories of reuse. The main sources of water in Kuwait and the other Gulf Cooperation Council (GCC) states are either desalination of seawater or groundwater aquifers. In fact, the Arabian Gulf serves as the only source and sink of water for Kuwait.
Over half of the global desalination capacity is deployed in the GCC states. The GCC states' installed capacity is currently in excess of five gigaliters per day (5 glpd) (Al-Zubari 1998, Al-Sofi 1994). In the Gulf, the rate of evaporation is greater than replenishment with river water. The practice of indiscriminate discharge of untreated or partially treated municipal and industrial wastewater adds to the pollution load in the Gulf. These problems are compounded by the fact that the water-intake points for major desalination plants are located near the wastewater discharge sites. The distribution of water among the three sectors, namely domestic, industrial and agricultural use, in the GCC states is illustrated in Table 1. Covering an area of 17,818 km2, Kuwait lies in the north-western corner of the Arabian Gulf.
Cultural Understanding and Consumer Behavior: A Case Study of Southern American Perception of Indian Food
Raymond Bailey, Erskine College, SC
Dr. Robert Guang Tian, Erskine College, SC
Cohesion Among Culturally Heterogeneous Groups
Dr. Norman Wright, Brigham Young University-Hawaii, Laie, HI
Glyn Drewery, Brigham Young University-Hawaii, Laie, HI
This paper examines the role of culture in explaining differences in self-reported evaluations of team cohesiveness in culturally diverse teams. In earlier research, Thomas et al. (1994) found that teams composed of culturally diverse members experienced less cohesiveness than did those in culturally homogenous teams. Such findings make sense in light of similarity theory, which suggests that humans feel a greater attraction to those who are most similar to themselves (Nahemow and Lawton, 1983). One might also profitably compare the cohesion of teams from various cultures. In this study, however, the authors examine the role of nationality on perceptions of cohesion within a mixed-culture team framework. Hypotheses are formed based on the conflict resolution style of each culture represented. The results indicate a small but significant relationship between the nationality of the respondents and the degree of cohesion attributed to their mixed-culture teams. Asians report the least cohesion, followed by Anglos, while Polynesians indicate the highest levels. Increasingly, business activities involve team members from multiple nationalities and cultures. While alliances between culturally diverse firms often make strategic sense, managers frequently underestimate the challenge of combining employees with different attitudes, beliefs, and work values. As an executive of a large European firm lamented, “we have had strategic plans suffer and careers derail because of complications arising from multinational groups” (Hambrick et al., 1998). In an effort to better understand these challenges, this paper examines the role of national culture in explaining differences in self-reported evaluations of team cohesiveness in culturally diverse teams. Over years of study, cohesion has arguably been the most important outcome variable among small groups (Carron and Brawley, 2000). Staw et al. (1981) defined cohesion as attraction to members in one's group.
It can further be defined as a collectivist type of togetherness that exists between team members when team needs transcend individual differences and desires. Cohesiveness arises in groups for two reasons (Tziner, 1982).
A New Method for Teaching the Time Value of Money
Dr. Terrance Jalbert, University of Hawaii at Hilo, Hawaii
Students frequently experience difficulty in identifying the appropriate time value of money (TVM) technique to apply to a problem. This paper surveys the TVM presentation in seven popular introductory finance textbooks. A new presentation technique is then developed, based on a simple method for identifying the appropriate TVM technique to apply to any problem. TVM techniques conducive to applying the calculations in a generalized setting are then presented. Visual aids are provided to assist students in selecting correct techniques. By using these techniques, students are able to identify appropriate TVM techniques more easily. Many techniques have been developed for presenting the time value of money (TVM). Despite this considerable effort on the part of instructors, students frequently experience difficulty identifying the appropriate technique to apply to a specific problem (Eddy and Swanson, 1996). However, it is well known that a pedagogy that works well with one audience does not necessarily work well with another (Bloom, 1956). Thus, the development of new and different techniques that appeal to various audiences is beneficial. This paper develops a new technique for teaching the TVM. The technique is specifically intended to appeal to students who benefit from precise definitions and visual aids. The technique affords instructors a new tool in their arsenal to teach students TVM concepts. The paper begins by surveying how seven popular introductory finance textbooks address the TVM issue. Next, a new approach for teaching the TVM is presented. The approach begins with a simple method for distinguishing between a single sum of money, an annuity, a perpetuity, a growing perpetuity and an uneven cash flow stream. Cash flows are distinguished by examining the conditions that must be met in order for a series of cash flows to qualify for each classification.
TVM techniques conducive to applying the calculations in a generalized setting are then presented. Finally, visual aids are provided to walk students through selecting the appropriate TVM technique for a problem. Students nearly unanimously experience difficulty in identifying the appropriate technique to apply to TVM problems. While the TVM issue is complex, some of the difficulty can be attributed to the approach that finance texts take to the issue. This contention is confirmed by Eddy and Swanson, who argue that instructors do not sufficiently develop a frame of reference which begins with simple learning objectives focused on individual topics and progresses to higher levels of understanding (Eddy and Swanson, 1996). This section contains a survey of the approaches used in seven popular finance texts to present TVM concepts.
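The classify-then-compute approach described above lends itself to a small worked example. The sketch below (illustrative only; the function names and the choice of Python are ours, not the author's) implements the standard present-value formula for each cash-flow pattern the paper distinguishes:

```python
def pv_single(fv, r, n):
    """Present value of a single sum received n periods from now."""
    return fv / (1 + r) ** n

def pv_annuity(pmt, r, n):
    """Present value of an ordinary annuity of n equal end-of-period payments."""
    return pmt * (1 - (1 + r) ** -n) / r

def pv_perpetuity(pmt, r):
    """Present value of a level perpetuity."""
    return pmt / r

def pv_growing_perpetuity(pmt, r, g):
    """Present value of a perpetuity growing at rate g each period (requires r > g)."""
    return pmt / (r - g)

def pv_uneven(cash_flows, r):
    """Present value of an uneven stream; cash_flows[0] occurs one period from now."""
    return sum(cf / (1 + r) ** (t + 1) for t, cf in enumerate(cash_flows))
```

Once a student has classified the cash flows by checking the qualifying conditions, the matching function applies directly, which mirrors the "identify the pattern first" emphasis of the proposed technique.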
Taking Note of the New Gender Earnings Gap: A Study of the 1990s Economic Expansion in the U.S. Labor Market
This article examines the impact of economic expansion on the gender earnings gap in the U.S. labor market during the 1990s. Using data from the 1994 to 2001 Current Population Surveys, this research employs the Blinder-Oaxaca decomposition method extended by Cotton, along with a correction for selectivity bias. The results show that the gender earnings gap widened from 1994 to 2001. The pattern of the gender earnings gap described by the results of the decomposition analysis, overall and across three broadly defined occupational categories, is extremely consistent, indicating that women were adversely affected during the economic expansion of the 1990s. The result of a slightly widened gender earnings gap casts doubt on the widely held optimistic expectation, developed over the past several decades, of a narrowing of the gap. In the future, labor policy should focus on changing the labor market structure so that females are treated equally with males, in order to narrow the gender earnings gap. During the past several decades, considerable attention in the academic arena has been focused on the analysis of women’s labor market position. Earnings are not only a major determinant of workers’ economic welfare, but also a significant factor in a multitude of decisions, ranging from labor supply to marriage and even to fertility (Blau and Kahn, 1999). For about 20 years after World War II, the ratio of women’s to men’s earnings remained at approximately 60 percent. Since 1976, however, the gender gap in annual earnings has on average declined by about 1 percent per year (O’Neill and Polachek, 1993). Also, during the years 1978 to 1999, the weekly earnings of women full-time workers increased from 61 percent to 76.5 percent of men’s earnings. However, the narrowing earnings gap failed to decline further after the mid-1990s (Blau and Kahn, 2000), pushing researchers to scramble for possible explanations.
Despite the intense scrutiny of the gender earnings gap, only a handful of researchers have attempted to examine its trends, especially in recent years. This research fills that gap by employing the Current Population Survey (CPS) to estimate the gender earnings gap from 1994 to 2001.
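The Blinder-Oaxaca decomposition mentioned above splits the mean earnings gap into an "explained" part (differences in average characteristics) and an "unexplained" part (differences in the returns to those characteristics). A minimal two-fold sketch, using the male coefficient vector as the reference structure (the Cotton extension and the selectivity correction the paper applies are omitted here, and the function name is ours), could look like:

```python
def oaxaca_twofold(mean_x_m, mean_x_f, beta_m, beta_f):
    """Two-fold Blinder-Oaxaca decomposition of the mean log-earnings gap.

    gap = (Xm - Xf)' * beta_m          (explained: endowment differences)
        + Xf' * (beta_m - beta_f)      (unexplained: coefficient differences)
    All vectors include the regression intercept as a leading element (x = 1).
    """
    explained = sum((xm - xf) * bm
                    for xm, xf, bm in zip(mean_x_m, mean_x_f, beta_m))
    unexplained = sum(xf * (bm - bf)
                      for xf, bm, bf in zip(mean_x_f, beta_m, beta_f))
    return explained, unexplained
```

By construction the two components sum to the total gap Xm'βm - Xf'βf, so the share attributed to each part can be tracked across the 1994-2001 survey years.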
Zen of Learning: Folkways Through Wisdom Traditions
Dr. Satinder K. Dhiman, Woodbury University, Burbank, CA
This paper discusses ten folk laws of learning. These laws are in the nature of musings about learning; the purpose of each law is to clear some psychological barrier to learning. These laws invite the learner to examine his or her assumptions and expectations about learning. They tell us that personal likes or dislikes may make for comfort but not for learning. No true learning can take place unless the learner is willing to undergo a shift of mind and to challenge his or her ingrained habits of thought. These laws also bring out the importance of patience, humility, and sharing in the context of learning. To underscore the message, this paper draws heavily on quotes, anecdotes, and stories culled from the wisdom traditions of Taoism, Sufism, and Zen. This author has used these laws in his management-related classes, at both the undergraduate and graduate levels. This methodology has helped this writer in orienting students in the art and science of learning. Besides, it clarifies several misconceptions about learning during the early stages of the course. It is also in harmony with the growing literature on the concept of the “Learning Organization” inspired by such management authors as Peter Senge and Max Depree. The following folk principles are in the nature of musings about learning. It is not the intention of this writer to present another "theory" of learning. No originality is intended or implied other than the presentation and rearrangement of the material. Most of these insights are based on the author's long-time study of the wisdom traditions of Sufism and Zen. Several anecdotes and stories have been used to illustrate the underlying theme. To facilitate better comprehension and assimilation of information, this writer has occasionally used appropriate teaching stories during class discussion. It is indicated to the students that these stories are not ends in themselves but means to an end, the end being a better understanding of the material presented.
In addition, these stories, owing to their symbolic value, serve as ideal developmental tools for learning. If eighty percent of the job is showing up, to quote Woody Allen, then it seems that the remaining twenty percent depends upon paying attention.
Computer Crimes: How Can You Protect Your Computerised Accounting Information System?
Dr. Ahmad A. Abu-Musa, KFUPM, Saudi Arabia
Computer crime is almost inevitable in any organization unless adequate protections are put in place. Computer crime is no longer a local problem, and security solutions cannot be viewed only from a national perspective: security threats have expanded from relatively limited geographical boundaries to become worldwide issues. Therefore, protecting computerised accounting information systems (CAIS) against prospective security threats has become a very important issue. The main objectives of this paper are to investigate the significant security threats challenging CAIS in the Egyptian banking industry and the prospective security controls actually implemented to prevent and detect security breaches. A self-administered questionnaire was used to survey the opinions of the heads of internal audit departments (HoIAD) and the heads of computer departments (HoCD) in the Egyptian banking industry regarding the following CAIS security issues in their banks: the characteristics of CAIS in the Egyptian banking industry; the significant perceived security threats to CAIS in the Egyptian banking industry; and the prospective security controls implemented to eliminate or reduce security threats in the Egyptian banking industry. The entire population (sixty-six banks' headquarters) of the Egyptian banking industry was surveyed in this research. Seventy-nine completed and usable questionnaires were collected from forty-six different banks' headquarters. Forty-six of these questionnaires were completed by the heads of computer departments, and thirty-three were filled in by the heads of internal audit departments. The response rate for computer departments (after excluding merged, liquidated, too remote and non-computerised banks) was 79.3%, whilst the response rate for internal audit departments was 56.9%. The paper proceeds to discuss the main CAIS security threats and the adequacy of implemented security controls in the Egyptian banking industry.
The significant differences between the two respondent groups, as well as among bank types, regarding the main security threats and implemented security countermeasures are investigated. Inadequate security controls have been discovered, and some suggestions to strengthen the weak points of security controls in the Egyptian banking industry are proposed. Information is a valuable corporate asset which should be protected with care and concern, because business continuity and success are heavily dependent upon the integrity and continued availability of critical information.
Marketing on the Net: A Critical Review
Dr. S. Altan Erdem, University of Houston-Clear Lake, Houston, TX
Dr. Richard L. Utecht, University of Texas at San Antonio, San Antonio, TX
While e-commerce has played an incredible role in marketing in recent years, there have been some concerns about its potential "not-so-positive" effects on certain business settings. Many believe that some aspects of e-commerce require changes to some of the basics of marketing. The purpose of this paper is to review some of these peculiarities of e-commerce and examine whether they are likely to result in any changes in traditional marketing practices. It is hoped that this review will provide marketing practitioners with added incentives to explore e-commerce ventures further and develop practical insights for making better use of the net in their marketing functions. The impact of e-commerce on marketing distribution channels is far-reaching and should not be underestimated. Technological and market forces will determine the extent to which consumers can gain access to the information they desire (Alba et al. 1997). The tremendous growth in Internet use has led to a critical mass of consumers and firms participating in the global on-line marketplace. In the context of consumer sales, e-commerce businesses must embrace a strategy that seeks to serve the distribution requirements of all consumer market segments. As people become more comfortable with the web, traditional businesses have to find new ways of marketing to their customers in web environments. As they move to incorporate direct Internet sales into established distribution channels, they will face a daunting task: businesses may face stiff opposition both from within the organization and from established channel partners. This paper seeks to examine the explosive impact of e-commerce activity. It reviews the main effects of the net on marketing channels.
The purpose is to examine issues such as the globalization impact caused by the Internet, the implications of the ever-increasing home shopping market, the issues facing consumers and businesses that utilize the Internet, and finally, the opportunities and challenges associated with marketing on the net. Technology is rapidly advancing every day, even as we speak. One of the greatest and most important advancements has been the World Wide Web.
Using Cost-Benefit Analysis for Evaluating Decision Models in Operational Research
Dr. Awni Zebda, Texas A&M University-Corpus Christi, Texas
Operational researchers and management scientists have recommended that the use of decision models should be subject to cost-benefit analysis. This paper provides insight into cost-benefit analysis and its shortcomings as a tool for evaluating decision models. The paper also identifies and discusses the limitations of alternative evaluation methods. Understanding the limitations of cost-benefit analysis and the other evaluation methods is essential for their effective use in evaluating decision models. Over the years, management scientists and operational researchers have proposed quantitative and mathematical models to aid decision making in business organizations. Decision models for problems such as capital budgeting, cash management, manpower planning, profit planning, and inventory planning and control represent an integral part of the management science/operational research literature, as well as the literature of the functional areas of management such as accounting, finance, marketing, personnel management, and production and inventory management. The development and use of decision models can be costly. Thus, establishing the value of these models is a necessary prerequisite for their use by practicing decision makers (e.g., Finlay and Wilson; Hill and Blyton). According to Gass [1983, p. 605], "the inability of the analyst [and researcher] to demonstrate to potential users ... that a model and its results have ... credibility [and value]" is one of the primary reasons that models are not widely used in practice. In spite of its importance, the question of model value has not received much attention in the management science/operations research literature (e.g., Finlay and Wilson; Gass; Miser). The purpose of this paper, therefore, is to provide insight into the most widely recommended methods for evaluating decision models, with special emphasis placed upon cost-benefit analysis.
Increased insight into cost-benefit analysis and other evaluation methods should benefit decision researchers, analysts, and practicing decision makers who are involved in the development and selection of decision models. The paper is organized around the following three questions.
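In its simplest textbook form, the cost-benefit criterion for adopting a decision model can be sketched as follows (this formulation is a generic illustration of the criterion the abstract discusses, not the paper's own notation):

```latex
V(m) \;=\; \bigl( E[\text{payoff with model } m] - E[\text{payoff without model } m] \bigr) \;-\; C(m)
```

where C(m) collects the costs of developing, operating, and maintaining the model; the model is worth adopting only if V(m) > 0. The difficulty the paper examines is that both the expected-benefit term and the cost term resist reliable measurement in practice.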
An Empirical Note on the Impact of the Price of Imported Crude Oil on Inflation in the United Kingdom
Dr. Richard J. Cebula, Armstrong Atlantic State University, Savannah, GA
Dr. Richard D. McGrath, Armstrong Atlantic State University, Savannah, GA
Dr. Yassaman Saadatmand, Armstrong Atlantic State University, Savannah, GA
Dr. Michael Toma, Armstrong Atlantic State University, Savannah, GA
This study empirically investigates whether the assumption by the Bank of England that rising prices of imported crude oil lead to domestic inflation in the United Kingdom has had validity. In a model where real GDP growth and money stock growth are both allowed for, empirical estimation reveals compelling evidence for the validity of this assumption. In particular, the greater the percentage increase in imported crude oil prices, the greater the domestic inflation rate. In addition, oil price shocks involving imported crude oil price hikes of 40 percent or more in a given year further elevate the domestic inflation rate. During the last three decades, it has been commonplace among public policymakers as well as consumers to assume that rising prices of imported crude oil act to increase domestic inflation; clearly, this constitutes a form of the so-called "imported inflation hypothesis" (i-i hypothesis). This assumption may have been predicated to some extent on the experience of the 1970s, wherein sharply rising crude oil prices imposed by O.P.E.C. nations were believed in many nations to have systematically exacerbated domestic inflation. For the case of the U.S. and the other G7 nations, at least one study [Cebula and Frewer (1980)] found strong empirical support for the i-i hypothesis. For the 1955-1979 period, Cebula and Frewer (1980) find rising prices of imported crude oil to lead to increased domestic inflation in all of the G7 nations. More recently, Cebula (2000) provides similar findings for the U.S. for the more current period of 1965-1999. However, whereas there has been only limited formal analysis of the i-i hypothesis as it involves crude oil prices for the U.S., even less such formal analysis has been performed for the other industrialized nations. Indeed, the Cebula and Frewer (1980) study is over two decades old.
Given the resilience of the acceptance among policymakers in industrialized nations of the i-i hypothesis as it relates to the price of imported crude oil, it may be useful to provide a formal updated investigation of the hypothesis for industrialized nations other than the U.S.
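The model the abstract describes might take a reduced-form shape along the following lines (the notation is illustrative; the authors' exact specification is not reproduced in the abstract):

```latex
\pi_t = a_0 + a_1\, g_t + a_2\, m_t + a_3\, p^{oil}_t + a_4\, D_t + u_t
```

where π_t is the UK domestic inflation rate, g_t is real GDP growth, m_t is money stock growth, p^oil_t is the percentage change in the price of imported crude oil, and D_t is a dummy equal to one in years when that price rose by 40 percent or more. The reported findings correspond to a_3 > 0 and a_4 > 0.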
Application of Taguchi Methods for Process Improvement for Tubular Exhaust Manifolds
Dr. C. P. Kartha, University of Michigan-Flint, Michigan
Taguchi Methods refer to quality improvement activities at the product and process development stages of the product development cycle. They are based on the realization that significant improvement in quality can be achieved by engineering quality into a product at the front end of the product cycle, which is the design stage, rather than at the manufacturing stage. This paper discusses the theoretical and practical aspects of Taguchi Methods. An application of Taguchi Methods to optimize a production process is also discussed in the paper. The process involves production of an automotive exhaust manifold, which had a quality problem involving excessive weld in a port opening that restricted passage of the required gauge and prompted a hand grinding operation. Through a Taguchi experiment the problem was successfully solved, eliminating the tedious hand grinding process. The improved process also resulted in significant cost reduction and increased efficiency. Taguchi Methods, also known as Quality Engineering Methods, refer to quality improvement activities at the product and process design stages of the product development cycle. Traditional quality control methods are designed to reduce variation during the manufacturing stage; the emphasis has been on tightly controlling manufacturing processes to conform to a set of specifications. Taguchi Methods, in contrast, are based on the realization that significant improvement in quality can be achieved by engineering quality into a product at the design stage rather than the manufacturing stage. By this method, variables that affect product quality are analyzed systematically to determine the optimum combination of process variables that reduces performance variation while keeping the process average close to its target. An important element of this method is the extensive and innovative use of statistically designed experiments.
This method has gained immense popularity in the United States in recent years. Though Taguchi Methods have been used successfully in Japan since the sixties, they were not used in the U.S. until the early eighties.
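The kind of analysis the paper describes can be illustrated with a small sketch: computing Taguchi smaller-the-better signal-to-noise (S/N) ratios over an L4 orthogonal array and averaging them per factor level. The array assignment, replicate counts, and weld-deposit figures below are invented for illustration; they are not the paper's experimental data.

```python
import math

# Hypothetical L4 orthogonal array: three two-level factors in four runs.
L4 = [
    (1, 1, 1),
    (1, 2, 2),
    (2, 1, 2),
    (2, 2, 1),
]

# Hypothetical weld-deposit measurements (smaller is better), 3 replicates per run.
results = [
    [2.1, 2.3, 2.0],
    [1.4, 1.5, 1.6],
    [1.8, 1.7, 1.9],
    [1.1, 1.2, 1.0],
]

def sn_smaller_is_better(ys):
    """Taguchi S/N ratio for a smaller-the-better response: -10*log10(mean of y^2)."""
    return -10.0 * math.log10(sum(y * y for y in ys) / len(ys))

sn = [sn_smaller_is_better(ys) for ys in results]

# Average S/N at each level of each factor; the level with the higher
# mean S/N is the preferred setting for reducing weld deposit.
for factor in range(3):
    for level in (1, 2):
        avg = sum(s for row, s in zip(L4, sn) if row[factor] == level) / 2
        print(f"factor {factor + 1}, level {level}: mean S/N = {avg:.2f}")
```

Choosing, for each factor, the level with the higher mean S/N gives the candidate optimum combination, which a confirmation run would then verify.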
Comparative Assessment of the Resume and the Personal Strategic Plan: Perspectives of Undergraduate Business Students, Human Resource Professionals and Business Executives
Dr. Lee R. Duffus, Florida Gulf Coast University, Florida
This research sets out to assess perceptions of the relative efficacy of the resume and the Personal Strategic Plan (PSP) as vehicles for employment prescreening, career development, and job advancement. The target groups were undergraduate business students, human resource professionals and business executives. The results indicate that respondents perceive the resume as adequate for employment prescreening situations. However, compared to the PSP, the traditional resume is perceived as less effective in communicating nuanced information on individual characteristics that will position and advance current employees along the career ladder toward attainment of their career objectives. The study concludes that human resource specialists should emphasize increased usage of the PSP, rather than the traditional resume, among current managerial employees for situations involving career development or job advancement. This will both improve the efficiency of the prescreening process and enhance the likelihood of employment decisions that are congruent with the strategic human resource needs of the organization and the career objectives of the employee. In response to the observation by some researchers and human resource professionals that the format and content of the traditional resume limit its effectiveness as a presentation format for personal, performance and career information (O'Sullivan 2002, Otte and Kahnweiler 1995), several authors suggest the need to develop a) a career plan (O'Sullivan 2002, Portanova 1995, Otte and Kahnweiler 1995), b) a personal business plan (Stokes Jr. 1997), and c) a personal development plan (Higson and Wilson 1995, Bullock and Jamieson 1995, Barrier 1994). Unfortunately, none adequately emphasizes the marketing process, nor are they sufficiently broad-based to be effective in employment prescreening or other tasks involving employee development and advancement.
Course Content on Managed Care: The Graduate Program in Health Services Administration at Florida International University
Dr. Kristina L. Guo, Florida International University, North Miami, FL
Education on managed care is essential to student career advancement and organizational survival. To ensure that students are adequately prepared to face and manage in the evolving managed care environment, this study discusses the degree of coverage of managed care concepts in the curriculum of the Graduate Program in Health Services Administration at Florida International University. Using the 3rd Year Progress Report to the Accrediting Commission on Education in Health Services Administration and course syllabi, the findings indicate that of the 17 graduate courses in HSA, 14 offer a wide range of managed care content. Through an interdisciplinary approach and continuous curriculum improvement, faculty emphasize critical skills and knowledge which enable students to analyze and respond to managed care challenges in actual health care practice. The evolving complexity of the health care system has led to the increased use of managed care to contain health care costs, improve access to care and deliver health care more efficiently (Shortell et al. 1996; Knight 1998; Kongstvedt 1997, 2001; Wenzel 1998). While managed care has rapidly become the primary delivery system for health services, it also creates numerous challenges for health care professionals. One of the main problems is adequately preparing health care professionals to understand the nuances of managed care given the continuous systemic, environmental, political, economic and organizational changes (Brown and Brown 1995; Ziegenfuss, Jr. and Weitekamp 1996). Making sense of the alphabet soup of managed care (HMOs, PPOs, POS, etc.) is difficult and often tricky. Health care professionals find themselves working among constant intricacies and ambiguities. Gaining a solid foundation in managing the structure, finance and delivery of health care is essential to career advancement and to the ultimate survival of organizations.
At Florida International University, the curriculum of the Health Services Administration Program provides the opportunity for students who are currently working in the health care field as administrators and clinicians, and for students striving for administrative positions, to expand their knowledge and be better prepared to work in health care settings involving various aspects of managed care. This article describes the Graduate Program in Health Services Administration at Florida International University, which awards the Master's degree in HSA (MHSA). Specifically, this article outlines the course content on managed care and its integration throughout the curriculum. Managed care is dominating the health care industry. Enrollment in health maintenance organizations (HMOs) reached 81.3 million as of January 1999 (Fox 2001). Managed care has taken root and thrives in many forms.
Global Economic Scenarios for the Twenty-First Century and the Future of World Economic Development: The Allen Hammond Scenario Projections
Dr. Richard G. Trotter, University of Baltimore, Baltimore, MD
This paper is an examination of world capitalism and economic and social development in terms of Allen Hammond's book, Which World? Scenarios for the 21st Century (1), which sets forth three scenarios for world economic and social development over the next fifty years. Hammond develops three scenarios for world economic development: (1) Market World, (2) Fortress World, and (3) Transformed World. This paper looks at these three scenarios in terms of world economic trends as well as the value systems of the societies in question. Additionally, global trends will be looked at in terms of such prescriptions for economic performance as privatization, free trade, and equitable distribution of income. The major areas of the world will be examined in terms of how effectively these strategies and prescriptions for economic success and development have been implemented and how successful they have been. With the fall of Communism in 1989 and the spread of market capitalism throughout the world in the 1990s, it was assumed by many practitioners and scholars that world capitalism was the key to human hope, prosperity and development. In the first decade of the twenty-first century, the optimism of the last decade of the twentieth century is being reexamined in terms of the realities of market capitalism as a panacea for human development. Additionally, societies are questioning, in terms of their own particular values, what kind of economic and social system they want and the trade-offs they are willing to make with respect to the benefits and burdens of market capitalism. The most economically developed regions of the world, the United States, Europe, and Japan, while all using capitalism as the economic system of choice, have taken different roads. According to Martin C.
Schnitzer, three varieties of capitalism have evolved: (1) Individualistic capitalism as practiced in the United States emphasizes (a) individualism; (b) short-term profit maximization; and (c) large income differentials. (2) Communitarian capitalism, also referred to as social market capitalism, is the major form of capitalism, as represented in Germany and Western Europe. This system has elaborate social welfare programs, less income inequity and an expanded state role. (3) In state-directed capitalism, as it exists in Japan and other East Asian countries, there is a closer relationship between government and business.
Security of Computerized Accounting Information Systems: An Integrated Evaluation Approach
Dr. Ahmad A. Abu-Musa, Department of Accounting & MIS, KFUPM, Saudi Arabia
Evaluating the security of CAIS is not an easy task. Reviewing the available literature in this area reveals considerable confusion and inconsistency, since research in evaluating the security of information systems is still in its infancy. In this paper, evidence regarding the need for, as well as the importance of, evaluating the security of CAIS is covered. The alternative approaches for evaluating CAIS security used and implemented in the previous literature are presented. Moreover, the requirements for selecting appropriate security countermeasures, as well as for implementing an effective security evaluation technique, are discussed. In this paper, the need for and importance of evaluating the security of CAIS will be outlined. The different alternative approaches for evaluating the security of CAIS will be considered, and the main requirements for implementing an information security tool will be presented. In addition, the limitations and problems of information security evaluation methods will be mentioned. Goodhue et al. (1991) have noted that there are numerous methodological questions regarding how to clearly measure security concern. Although many previous studies have employed user perception as an empirical measure, such measures may lack theoretical clarity because they lack a theoretical underpinning. The most commonly cited reference discipline for these measures has been job satisfaction research. However, "IS satisfaction" has not been well enough defined to clarify how it is similar to or different from "job satisfaction" (p. 15). Risk analysis of the information technology environment represents another approach for evaluating information security. A literature review by Eloff et al. (1993) indicated that inconsistent terminology had been used in previous studies. These differences in terminology gave rise to the need for a standard set of terms to be used for the comparison of various risk analysis methods.
As Kumar (1990) points out, evaluation in general serves to: verify that the system met requirements; provide feedback to development personnel; justify the adoption, continuation or termination of a project; clarify and set priorities for needed modifications; and transfer responsibilities from developers to users (from Conrath et al., 1993, p. 267).
Security of Computerized Accounting Information Systems: A Theoretical Framework
Dr. Ahmad A. Abu-Musa, Department of Accounting & MIS, KFUPM, Saudi Arabia
It has been claimed that "security of computerized accounting information systems (CAIS)" is an ill-defined term in the accounting literature. The current paper is conducted in response to numerous calls for research that have emphasized the necessity of conducting theoretical research to enhance the body of knowledge concerned with CAIS security. The paper addresses the concept of CAIS security and its main components in an attempt to clarify confusion in this area. Through theoretical conceptualization of information and systems security, an integrated theoretical framework of CAIS security is introduced. In this paper, the concept and meaning of CAIS security will be presented. The importance of CAIS security as a significant element in an organization's success and survival will be discussed. The security objectives of CAIS and its main components will be highlighted; and finally, an integrated approach to CAIS security will be presented. Security is an ill-defined term in the technical literature. It has been used to denote the protection and well-being of political entities, as in the term "national security". It may also refer to industrial protection by security or protection departments. Police forces having limited responsibilities are also sometimes called security forces. The standard lexical definition equates security with freedom from danger, fear, anxiety, uncertainty, economic vicissitudes, and so forth (Parker, 1981, p. 39). Granat (1998) argued that the term "security" might mean different things to different people. To some it is a concern for preserving the "date" integrity of existing database records into the new millennium; to others, it is securing privacy for proprietary and restricted information; to yet others, it means preserving original records and protecting their integrity.
The International Information Technology Guidelines issued by the International Federation of Accountants (IFAC) in 1998 stated that, "The concept of security applies to all information. Security relates to the protection of valuable assets against loss, disclosure and damage. In this context, valuable assets are data and information recorded, processed, stored, shared, transmitted, or retrieved from electronic media. The data or information must be protected against harm from threats that will lead to its loss, inaccessibility, alteration or wrongful disclosure". However, most of the literature defines information security as the protection of information confidentiality, integrity and availability. This definition is used as equivalent to "prevention against security breach". Accordingly, information security could also be defined as "the prevention of the unauthorized disclosure, modification or withholding of information".
Dr. Jae J. Lee, State University of New York, New Paltz, NY
This paper explores a Monte Carlo approach to dealing with parameter uncertainty in extracting signals from economic time series, specifically the use of Monte Carlo integration with acceptance/rejection sampling. This approach treats the set of unknown parameters as a random vector, so that parameter uncertainty can be eliminated by integrating the vector out. Through a simulation study, this approach is compared with the commonly used approach in terms of mean square errors. Many economic time series Zt (possibly after transformation) can be written as the sum of an unobserved signal component St and a nonsignal component Nt, namely Zt = St + Nt, where the components St and Nt follow ARIMA model specifications. For example, in repeated sample surveys (Scott et al., 1977; Bell & Hillmer, 1990), Zt is an estimate of the population value derived from standard sample survey methods and Nt is sampling error, with Zt observable at time t. In seasonal adjustment (Box et al., 1978; Hillmer & Tiao, 1982), St is an unobserved seasonal component and Nt is an unobserved non-seasonal component, with Zt observable at time t. Extracting the signal component from the observed Zt means finding a minimum mean square estimator (MMSE) of the unobserved signal component, St, and its mean square error (MSE) at some point in the sample. Conditional on the full set of observations Z = (Z1, ..., Zn), the MMSE of the signal component is its expectation E(St | Z, θ) and the MSE is its variance Var(St | Z, θ), where θ is a vector of parameters in the ARIMA specifications of St and Nt (Harvey, 1993). The MMSE and MSE of the signal component can be obtained with the Kalman filter/smoother given ARIMA specifications of the signal and nonsignal components. In practice, however, since the ARIMA specifications of the components are usually unknown, they must be estimated from the information available. The forms of the ARIMA models are identified using information in the observed time series and knowledge about the nonsignal component, such as sampling errors.
After the forms of the ARIMA models of the components are identified, a commonly used approach is to use a maximum likelihood estimate (MLE) of the parameter vector θ. Then, using the identified models with the MLE of θ, the MMSE estimates of St and Nt, and their MSEs, are obtained by applying the Kalman filter/smoother. More details are found in Dagum et al. (1998) and Pena et al. (2001). Therefore, in practice, extracting the signal component involves two sources of uncertainty: model uncertainty, due to the fact that the ARIMA models for St and Nt are unknown, and parameter uncertainty, due to the need to estimate the parameters in the identified ARIMA models. In this paper, an approach to dealing with parameter uncertainty in extracting the signal component is investigated.
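The idea of integrating out parameter uncertainty via acceptance/rejection sampling can be sketched on a toy signal-plus-noise model. The sketch below is an illustration only: it assumes i.i.d. Gaussian components (not the paper's ARIMA setup), a single unknown noise variance sig2 with a flat prior on (0, 4], and invented constants. Candidate parameter values are drawn from the prior, accepted in proportion to their likelihood, and the conditional MMSE extraction weight is then averaged over the accepted draws.

```python
import math
import random

random.seed(0)

# Toy model (illustrative, not the paper's specification):
#   z_t = s_t + e_t,  s_t ~ N(0, tau2),  e_t ~ N(0, sig2),  sig2 unknown.
tau2 = 1.0
true_sig2 = 0.5
z = [random.gauss(0, math.sqrt(tau2)) + random.gauss(0, math.sqrt(true_sig2))
     for _ in range(200)]

def log_lik(sig2):
    """Gaussian log-likelihood of the observations for a candidate sig2."""
    v = tau2 + sig2                     # marginal variance of z_t
    return sum(-0.5 * (math.log(2 * math.pi * v) + x * x / v) for x in z)

# Envelope for acceptance/rejection: approximate the maximum log-likelihood
# over a grid of sig2 values.
grid_max = max(log_lik(0.01 + 0.01 * k) for k in range(400))

# Propose sig2 from the flat prior, accept with probability
# exp(log_lik(cand) - grid_max).
draws = []
while len(draws) < 500:
    cand = random.uniform(1e-3, 4.0)
    if math.log(random.random()) < log_lik(cand) - grid_max:
        draws.append(cand)

# Parameter uncertainty is integrated out by averaging the conditional
# MMSE extraction weight tau2 / (tau2 + sig2) over the accepted draws.
weight = sum(tau2 / (tau2 + d) for d in draws) / len(draws)
s_hat = [weight * x for x in z]         # signal estimates for each z_t
print(f"posterior-mean extraction weight: {weight:.3f}")
```

In the paper's setting the conditional step would instead run the Kalman smoother under each accepted parameter draw, but the integration logic is the same: average the conditional MMSE (and fold the spread across draws into the MSE).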
Toward An Interdisciplinary Organizational Learning Framework
Dr. Tony Polito, East Carolina University, Greenville, NC
Dr. Kevin Watson, Marist College, Poughkeepsie, NY
Organizational learning theory is multidisciplinary. There is no current consensus on a model of organizational learning theory; even a consistent definition of organizational learning has been elusive in the literature, though typologies of organizational learning theories can be found. This paper searches for points of agreement regarding organizational learning among organizational theorists, then gives special attention to the economic perspective on organizations and learning. Organizational learning comprises both behavioral and cognitive processes; higher-level learning, unlike lower-level learning, involves adaptation. Researchers disagree as to whether either change or effectiveness is requisite to organizational learning. Researchers generally agree that organizational learning that does effect change involves systematic shock anticipated by tension, but differ regarding the constitution of that shock and tension. Specific economic perspectives on the firm can also provide a framework for organizational learning theory. Much of the neoclassical theory of the firm, which views the firm as a set of human-resource holders maximizing profit under a known production function, is under question; organizational theorists now generally embrace the relevant transaction cost and agency perspectives. Harvey Leibenstein, the Harvard economist, views the firm in terms of internal efficiency, embraces Argyris and Schön's perspective of organizational learning as a process of error handling, sees the individual actor's motivation to admit, detect and correct error as a special case of the productivity problem, and analyzes it in a game-theoretic, agency-like manner. Leibenstein's perspective respects much of the noted concordance regarding organizational learning. Organizational learning theory is multidisciplinary (Dodgson, 1993).
Within the literature, researchers note the relevance of psychology, organizational theory, innovation management, strategic management, economics, organizational behavior, sociology, political science, information systems, anthropology, and production/industrial management (Argyris & Schön, 1978b; Dodgson, 1993; Fiol & Lyles, 1985; Leibenstein & Maital, 1994; Perrow, 1986; Shrivastava, 1983). In fact, Argyris and Schön group organizational learning theories into types that parallel the associated disciplines (Argyris et al., 1978b). There is, however, a noticeable absence of a multidisciplinary synthesis of organizational learning research (Huber, 1991). Dodgson believes such a synthesis would serve to avoid the introspective and parochial views seen in the existing literature, and that synthesis is requisite for future research (Dodgson, 1993). This paper searches for points of agreement among organizational theorists, gives special attention to the economic perspective on organizations and learning, and focuses on Leibenstein's perspective as a point of intersection. There is no current consensus regarding a model for organizational learning theory.
The Dilemma of Governance in Latin America
Dr. José Gpe. Vargas Hernández, Centro Universitario del Sur, Universidad de Guadalajara, México
The last decades of the 20th century saw the institutions of governance in Latin American countries affected by small macroeconomic achievements, reduced economic growth, and the development of an extremely fragile democracy. The implantation of the new model of neoliberal state consolidation has come at high cost, and has produced neither the expected strengthening of the political, economic and social spheres, nor the expected gains in efficiency, equity and freedom. This so-called economic liberalization has generated institutional instability in the structure and functions of the state, limiting the reach of democracy and legality, and ensuring that the effects of the associated managerial orientation, which has transformed public administration, are largely negative. Looking forward into the 21st century, a pessimistic prediction is that these tendencies will continue, producing similar unstable mixes of democratic populism and oligarchic pragmatism. More optimistically, the Latin American states may come to see that genuine social development is necessary for sustained economic growth, and introduce policies to achieve that outcome. The globalization processes surprised Latin American countries because they did not have the political-economic mechanisms and the necessary institutions to assimilate their effects in such a way as to achieve social justice in the distribution of the wealth that was created. The challenges posed for Latin America by globalization require a further revision of the romantic utopias that came first with the Bolivarian independence of the early 1800s and subsequently with several popular revolutions in various parts of the region. Whatever its benefits, globalization clearly has perverse effects. The 100 biggest transnational companies now control 70% of world trade, although a significant relationship does not exist between the growth of world trade and world gross product.
The volume of the financial economy is 50 times that of the real economy.
Changes of Economic Environment and Technical & Vocational Education in Korea
Dr. Namchul Lee, Korea Research Institute for Vocational Education & Training, Seoul, Korea
Dr. Ji-Sun Chung, Korea Research Institute for Vocational Education & Training, Seoul, Korea
Dr. Dennis B. K. Hwang, Bloomsburg University, Bloomsburg, PA
The purpose of this paper is to investigate changes in the economic environment and in technical & vocational education in Korea since 1985. The paper provides the basis for annual updates and identifies trends in implemented policies in the field of technical and vocational education. In addition, it is intended to provide useful information about the current status and future direction of technical & vocational education in Korea for government policy makers and school educators. Expected changes in the industrial structure of the nation will require changes in the emphasis and weighting given to the technical and vocational education (hereafter TVE) system in Korea. The Korean economy has been transforming from a manufacturing-based to a knowledge-based structure owing to the continuous development of new technology, especially in information and communication (OECD, 1996). These trends have two important implications for technical and vocational education programs. They signal an ongoing shift in the fields of education and training required of the Korean workforce, as well as shifts in the levels of that education and training. TVE programs that prepare students for knowledge-based jobs include high technology, medium-high technology, information communication technology (ICT), finance, business, health, and education. In Korea, TVE programs under the formal education system are provided at both high schools and junior colleges (Ministry of Education, 2001). In this paper we review the literature and statistical data on this topic by studying changes in industrial structure, labor force participation, and TVE programs since 1985. Understanding labor market trends provides a context for analyzing trends in TVE.
For example, if participation in TVE programs parallels changes in the economy, one would expect to see a decline in enrollments in agriculture and manufacturing programs in recent years and an increase in enrollments in service and information communication technology related programs. The major purpose of this paper is to provide a picture on the basis of annual updates and to identify trends in implemented policies in the field of TVE; it is also intended to serve as a working tool for analysis, policy formulation, and policy-making in the field. The remainder of this paper is organized as follows. Section 2 briefly reviews the literature on the relationship between employment and changes in the industrial structure in Korea. Section 3 presents the major TVE trends in terms of enrollments in vocational high schools and junior colleges. Section 4 shows TVE outcomes in terms of employment.
An Exploratory Analysis of Customer Satisfaction
Dr. Turan Senguder, Nova Southeastern University, Ft. Lauderdale, FL
Satisfaction is the consumer's fulfillment response. It is a judgment that a product or service feature, or the product or service itself, provided a pleasurable level of consumption-related fulfillment, including levels of under- or over-fulfillment. Here, pleasurable implies that fulfillment gives or increases pleasure, or reduces pain, as when a problem in life is solved. Dissatisfaction is the displeasure that arises from under-fulfillment. It is well known among marketers of "style" goods that one purpose of new products is to create dissatisfaction with the prevailing style - a common strategy of automobile companies through the release of new models. Consider a first-time consumer: imagine a consumer with no experience in buying a particular product. Having an interest in its purchase, the consumer might read advertisements and consumer guides to acquire information. This information, usually regarding benefits the product will deliver, provides the consumer with expectations about the product's likely performance. Because a number of suitable alternatives are available, this consumer must choose among them. Choosing one alternative thus requires that the consumer forgo the unique features of the others. This creates two problems. First, the consumer may anticipate regret if the chosen alternative does not work out as well as other choices might have. Second, until the consumer has consumed, used, or sufficiently sampled the product (as in driving a car over a period of time), an apprehension or tension, known more commonly as dissonance, will exist over whether the choice was best. Once the product is used and its performance is evident, the consumer is in a position to compare actual performance with expectations, needs or other standards, resulting in an expectation-performance discrepancy. Having purchased a product previously, the consumer has probably developed an attitude toward it.
Here an attitude is a fairly stable liking or disliking of the product based on prior experience. It is also possible for an attitude to develop from prior information without experience, as when consumers develop biases for or against brands based on the brands' images in the marketplace.
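The expectation-performance comparison described above can be sketched as a simple rule. This is a toy illustration only; the function names, thresholds, and labels are ours, not a model proposed in the paper:

```python
def disconfirmation(performance, expectation):
    """Expectation-performance discrepancy: positive means over-fulfillment,
    negative means under-fulfillment."""
    return performance - expectation

def satisfaction_response(performance, expectation):
    """Crude classification of the consumer's fulfillment response."""
    d = disconfirmation(performance, expectation)
    if d > 0:
        return "satisfied (over-fulfillment)"
    if d < 0:
        return "dissatisfied (under-fulfillment)"
    return "neutral (confirmation)"

# A product that performs above expectations produces satisfaction;
# one that falls short produces the displeasure of under-fulfillment.
print(satisfaction_response(8, 5))
print(satisfaction_response(3, 5))
```

In this crude form the comparison ignores attitudes formed from prior experience, which the passage notes also shape the response.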
A Perspective on Team Building
Dr. Jean Gordon, Barry University, Miami, Florida
The way we work is changing. Middle management has been cut to its lowest levels, and organizations are flattening their structures, generating new business process methodologies. Many of the changes being experienced are a result of restructuring, mergers and acquisitions, global competition and changing work trends. Teams are becoming the norm in this new workplace and are seen as one way of leveraging organizational strengths to offset new challenges. Research has shown that team building takes time and effort to produce systematic, lasting results. Furthermore, teams are beginning to change the way workers work, and organizations are beginning to realize that a “we” culture may better suit business needs than the traditional “I” culture. This research seeks to outline the characteristics of successful teams and to expand on theorists who believe that knowledge of how people work will enhance team-building efforts. The purpose of this paper is to generate new ideas on team building while expanding on existing research and processes. Let us begin with a definition of “team” in the context of the workplace. Teams can be defined as small groups of people committed to a common purpose, who possess complementary skills and who have agreed on specific performance goals for which the team holds itself mutually accountable (Katzenbach and Smith). Effective teams must have individuals with complementary skills in order to meet the ever-changing needs of both internal and external customers. Further, effective teams must have specific goals to strive for, which allow mutual accountability. Finally, teams should be composed of a small number of people (preferably an odd number, e.g. five or seven) to ensure consensus without discord. First, let us look at what makes a team successful. The Pfeiffer Book of Successful Team-Building Tools (Biech, pp. 13-26) gives ten characteristics of successful teams: clear goals, which allow everyone to understand the function and purpose of the team; defined roles, which allow team members to understand why they are on the team and enable clear individual and team-based goal setting; open and clear communication - considered the most important aspect of team building, effective communication hinges on effective listening; effective decision making - for a decision to be effective, the team must be in agreement with it and must have reached that agreement through a consensus-finding process; and balanced participation, which ensures that all members are fully engaged in the efforts of the team. Participation is also directly linked to leader behaviors.
Factors Affecting Customer Loyalty in the Competitive Turkish Metropolitan Retail Markets
Dr. Altan Coner, Yeditepe University, Istanbul, Turkey
Mustafa Ozgur Gungor, Yeditepe University, Istanbul, Turkey
The dynamic behavior of markets, adaptation to diverse social segments, and the flexibility needed for each individual consumer are the challenges marketing faces today. One of the main drivers of this one-to-one marketing era is the increased capability brought by the technology renaissance of the 1990s; the other is the evolution of customer relationship management (CRM). The need to build effective relationships with customers has become essential for businesses that wish to remain competitive. Therefore, CRM is implemented as a combination of managerial and marketing practices with detailed collection and analysis of customer-related data. Moreover, the importance of understanding customers' behavior in order to develop a better relationship and keep them satisfied makes customer loyalty management a critical issue for any business acting in highly competitive markets. This paper presents the findings of research focused on examining the factors acting on customer loyalty. The research was carried out in the metropolitan Turkish retail markets, where intense rivalry exists. After it was introduced in the 1960s, the paradigm of the marketing mix and the four Ps of marketing - product, price, place and promotion - became the major arena of marketing science for the following decades. This mainstream was followed by many researchers and developed in various dimensions after its first introduction. Detailed discussions of each P appear in many titles (McCarthy, 1960; Kotler, 1991; Boone and Kurtz, 1995; Kotler and Armstrong, 1999). Kotler (1991) provided one of the most comprehensive treatments of these four Ps around the 1990s, explaining the fundamental four-P model using services and communication marketing terminology along with additional key aspects of the theory.
However, these aspects had been extensively proposed in Borden's formulation (Grönroos, 1994) long before Kotler, and were summarized in the final model. Kotler, using the generalized model, discussed contemporary marketing issues in an interdisciplinary manner. This model found its most popular implementation methodologies and was widely applied in the 1990s after Kotler's additions. On the other hand, these broader theories of marketing remained largely outside the scope of the business world until the Internet era arrived. E-business became the new paradigm in the life of industries, businesses, and individuals, who now interact, relate, mobilize, and customize. Although these terms were not new to marketing, the rapid change of the media, of production processes, and of the nature of distribution challenged the four Ps.
Dr. Charles A. Rarick, Barry University, Miami, FL
Marianne Whitaker is very concerned about the success of one of her new account representatives, Pongpol Chatusipitak, a Thai national who has worked for her at Premuris Investments for only six months. Pongpol does not appear to Marianne to be very motivated, and some of his behavior seems odd to her. A decision must be reached concerning his future with the company. The primary subject matter of this case concerns the cross-cultural difficulties found in managing a foreign expatriate in the United States. Issues particularly relevant to the cultural differences between Thailand and the United States are emphasized. It was a typically beautiful day in Southern California as Marianne Whitaker peered out her office window at Premuris Investments to the streets below. Marianne was not able to enjoy the scenery, as she was very concerned about the performance of one of her financial services advisors, Pongpol Chatusipitak. Pongpol had been hired six months earlier to help generate increased business from the large Thai business community of Southern California. Pongpol had not generated much business in his first few months, although recently his performance had improved. Marianne was also concerned about some of his personal and work behaviors. Marianne felt that she would like to fire Pongpol; however, this may not be an option. She wondered out loud where she had gone wrong and whether anything could be done to improve the situation. Pongpol Chatusipitak, or “Moo” as he liked to be called, was from Chiang Mai in northern Thailand. Pongpol graduated from Chulalongkorn University in Thailand with a degree in economics. After working for a Thai bank for three years, he enrolled in the graduate program at the University of Southern California to study finance. Upon completion of an M.B.A. degree from USC, Pongpol was hired by Premuris as a financial services advisor. Marianne Whitaker remembers how she was struck by the warm and easygoing nature of Pongpol.
He seemed to have a perpetual smile and appeared very conscientious. Pongpol did not have an outstanding academic record at USC; however, Marianne had discounted the importance of grades and was more concerned with what she considered to be a strong work ethic among Asian people. The fact that Pongpol had an M.B.A. from a respected school and spoke fluent Thai made him a good candidate for the position.
The Correlative Relationship between Value, Price & Cost
Dr. Richard Murphy, President, Central Marketing Systems Inc., Ft. Lauderdale, FL
A consumer goes to an electronics store to purchase a new television set. The consumer spends almost an hour listening to the salesperson, looking at and comparing different models, and selects a model priced at $585. Did the television cost the consumer $585? Many people would answer "yes" because that was its price. But there is a difference between the actual dollars charged as the price and the cost to the consumer. That customer invested time and energy in the purchase in addition to the dollars paid. The cost to the consumer, then, must include all the resources that were used to make the purchase. Today's consumer is bombarded with advertisements in all media, direct mail offers, and telemarketing offers for long distance telephone service. One of AT&T's ads boasts a rate of 7 cents per minute for long distance. The price is 7 cents, but that is not the cost. Whether the ad is a commercial on television or an ad in a print document, a small caveat is printed: the consumer will be billed a charge of $5.95 per month for signing up for this long distance rate (Teinowitz, 1999). The actual cost to the consumer is a good deal more - $5.95 per month plus the 7 cents per minute. This is the difference between the price and the cost. We take this concept one step further in this paper: what was the value? Did the value equal the cost? Numerous factors are involved when we begin to discuss the issues of value, cost and price. The value of anything is perceived by the customer, not the manufacturer or the vendor. Value is an abstract construct that the consumer determines based on a number of factors; the degree of risk in the purchase is one such factor in perceived value. Consumers must perceive that they receive higher value from one vendor or one product than from another in order to purchase it.
The cost includes the actual price of the product or service but it also includes the 'hidden' costs, such as the time it takes to travel to the store or the time it takes to complete the transaction. The following pages more fully discuss the issues of cost, price and value. The marketing mix includes those variables that the marketing department can control in advertising a product or service. It is intended to convey to the consumer the value to them if they purchase this product or service. When this concept was first designed, it was called the 4Ps – product, place/distribution, pricing and promotion (Dennis, 1999). They represent the marketers’ bag of tools, an armory that can be manipulated to gain a competitive advantage over competitors (Carson, 1998; Dennis, 1999).
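The price/cost distinction in the long distance example above can be made concrete with a short calculation using the figures quoted from the AT&T offer (the function names are ours, for illustration only):

```python
def total_cost(minutes, per_minute=0.07, monthly_fee=5.95):
    """Total monthly cost in dollars: the advertised per-minute price
    plus the fixed monthly charge disclosed in the ad's fine print."""
    return monthly_fee + per_minute * minutes

def effective_rate(minutes, per_minute=0.07, monthly_fee=5.95):
    """Effective cost per minute once the monthly charge is included."""
    return total_cost(minutes, per_minute, monthly_fee) / minutes

# For a consumer who talks 100 minutes a month, the effective rate is
# (5.95 + 0.07 * 100) / 100 = 0.1295, i.e. about 12.95 cents per minute,
# nearly double the advertised 7-cent price.
print(round(effective_rate(100), 4))
```

The gap between price and effective cost shrinks as usage grows, which is precisely why the advertised per-minute price alone understates the cost to a light user.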
The Accounting Crisis as a Result of the Enron Case, and Its Implications on the U.S. Accounting Profession
Dr. Dhia D. AlHashim, California State University, Northridge, California
In a free-enterprise economy, the integrity of the economic system is crucial to investors’ confidence. Lately, with the discovery of so many business scandals, investors’ confidence in the corporate system and the accounting profession has eroded. The purpose of this research is to investigate the reasons for the recent business scandals, particularly that of Enron Corporation, their impact on the U.S. accounting profession, and the lessons learned for developing nations. On August 14, 2001, Mr. Jeffrey Skilling, CEO, resigned from Enron; on November 8, 2001, Enron restated its earnings for the years 1997 through 2000; and on November 30, 2001, Enron filed for bankruptcy protection. Enron wiped out $70 billion of shareholder value and defaulted on tens of billions of dollars of debts, and its employees lost their life savings (the pension plan consisted of Enron’s stock). The question is: why did Enron collapse? There is only one answer, in my opinion, and that is derivatives! A major portion of these derivatives relates to the now infamous “Special Purpose Entities” (SPEs). Enron Corporation was one of the pioneers of energy deregulation and became a major force in the trading of energy contracts in the U.S. and overseas markets. Last year, the company was considered the seventh largest in the U.S., with revenues exceeding $150 billion and assets of more than $60 billion. It handled about one quarter of the U.S.’s traded electricity and natural gas transactions. However, it appears that Enron’s success was not entirely due to the brilliant business strategies developed by its former chairman Ken Lay. As the unraveling scandal shows, a significant portion is attributable to innovative financing and accounting strategies. There is no question that the continuation of deregulation of the economy and the privatization of services depend on the integrity of financial reporting systems. Integrity can be achieved by having a fair and transparent accounting system.
It is alleged that accountants are compromising their integrity by manufacturing companies’ earnings for the sake of obtaining a piece of the action. Observing recent unusual business events leads us to the conclusion that it is not only Enron that has been manufacturing earnings and hiding debts in subsidiaries and partnerships with the help of its accountants; many other U.S. companies are hiding trillions of dollars of debt in off-balance-sheet subsidiaries and partnerships, such as UAL ($12.7 billion), AMR, parent of American Airlines ($7.9 billion), J.P. Morgan Chase ($2.6 billion), Dell Computer ($1.75 billion), and Electronic Systems ($0.5 billion). This research investigates the impact of these recent business scandals, particularly that of Enron Corporation, on the U.S. accounting profession, with possible lessons for developing countries. Enron’s goal of becoming “the world’s greatest company” required a continuous infusion of cash, which in turn demanded favorable debt/equity ratios and high stock prices.
The Relationship Between Dividends and Accounting Earnings
Dr. Michael Constas, California State University, Long Beach, CA
Dr. S. Mahapatra, California State University, Long Beach, CA
This research examines the relationship between dividends and earnings. The model used here is a variation of the model tested in Fama and Babiak (1968), which has remained largely unaltered in the subsequent empirical literature. The importance of this model is underscored by Copeland and Weston (1988), Kallapur (1993), and Healy and Modigliani (1990), the last of which used it to examine the influence of inflation on dividend policy. This research, however, differs from the Fama and Babiak model in important respects. The Fama and Babiak model is linear, while the model tested in this research is a linear logarithmic transformation of a nonlinear relationship. The Lintner (1956) and Fama and Babiak (1968) model has an additive error term with a normal distribution, whereas the model tested herein assumes that the underlying relationship has a multiplicative error term with a lognormal distribution. The empirical results reported in this paper reflect an improvement over the results obtained by using the original Fama and Babiak (1968) model. The Fama and Babiak (1968) study involved running separate regressions for each firm. In the revised model used here, the cross-sectional parameters are significant, and, in both cross-sectional and separate-firm regressions, the revised model produces higher adjusted R2s than the Fama and Babiak model. The Fama and Babiak (1968) model is based upon the premise that a firm’s current year’s dividends reflect its current year’s earnings. The prior year’s dividends are subtracted from both the current year’s dividends and earnings in order to produce the change in dividends as the dependent variable. The empirical results reported here, however, suggest that the presence of the prior year’s dividends as an independent variable is an important part of the relationship between dividend changes and earnings changes. Current dividends appear to be adjusted when a firm experiences earnings that are inconsistent with prior dividend declarations.
This adjustment can be explained in two ways. First, it may be that a firm readjusts its dividends when it experiences inconsistent earnings because its ability to pay dividends has changed. Second, the adjustment may be due to the fact that dividends serve as management’s signal as to how the firm is expected to perform in the future, and this signal changes due to new information.
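The abstract does not reproduce the equations, but the contrast it draws can be sketched as follows (notation ours; a sketch consistent with the description above, not the authors' exact specification). The Lintner-type model estimated by Fama and Babiak has an additive normal error, while the revised model's multiplicative lognormal error becomes additive after a logarithmic transformation:

```latex
% Lintner / Fama--Babiak partial-adjustment specification (additive error):
\Delta D_{i,t} \;=\; \alpha \;+\; \beta_1 E_{i,t} \;+\; \beta_2 D_{i,t-1} \;+\; \varepsilon_{i,t},
\qquad \varepsilon_{i,t} \sim N(0, \sigma^2)

% Multiplicative (lognormal-error) variant, linearized by taking logs:
D_{i,t} \;=\; A \, E_{i,t}^{\beta_1} \, D_{i,t-1}^{\beta_2} \, \eta_{i,t}
\quad\Longrightarrow\quad
\ln D_{i,t} \;=\; \ln A \;+\; \beta_1 \ln E_{i,t} \;+\; \beta_2 \ln D_{i,t-1} \;+\; \ln \eta_{i,t}
```

Here $D_{i,t}$ is firm $i$'s dividend in year $t$, $E_{i,t}$ its earnings, and $\ln \eta_{i,t}$ is normally distributed, so the log-transformed relationship can be estimated by ordinary least squares just as the additive model can.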
Trade and the Quality of Governance
Dr. Fahim Al-Marhubi, Sultan Qaboos University, Sultanate of Oman
Different strands of the trade and governance literature imply a link between the openness of an economy to international trade and the quality of its governance. This paper tests this proposition using a newly created dataset on governance that is multidimensional and broad in cross-country coverage. The results provide evidence that the quality of governance is significantly related to openness in international trade. This association is robust to alternative specifications, samples, and governance indicators. The last decade has witnessed an explosion of research on economic growth. Two issues that lie at the heart of this research are the roles of international trade and of governance in promoting growth and better development outcomes. However, due to conceptual and practical difficulties, these two lines of research have run in parallel without explicit recognition of each other. Conceptually, the relationship between openness and governance has been left rather imprecise, with a notable absence of a convenient theoretical framework linking the former to the latter. Practically, the difficulty lies in defining governance. While it may appear to be a semantic issue, how governance is defined actually ends up determining what gets modeled and measured. For example, studies that examine the determinants of governance typically tend to focus on corruption (Ades and Di Tella, 1999; Treisman, 2000). However, governance is a much broader concept than corruption, and little has been done to address the other dimensions of governance discussed in the next section. The purpose of this paper is to investigate more systematically the link between the openness of an economy and the quality of its governance. A practical difficulty that arises, however, in trying to estimate the exogenous impact of openness on governance in a cross-section of countries is that the amounts that countries trade are not determined exogenously.
Openness may be endogenous since it is quite likely that countries that can manage risks and exploit opportunities from trade because of their high quality governance choose or can afford to be more open. Hence, better governance can lead to greater openness rather than the other way round. As a result, correlations between openness and governance may not reflect an effect of trade on governance.
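An endogeneity concern of this kind is conventionally handled with instrumental variables. As a sketch (notation ours; the choice of a geography-based instrument in the spirit of Frankel and Romer, 1999, is our assumption, not necessarily this paper's strategy):

```latex
% First stage: instrument observed openness with a geographically
% constructed trade share Z_i that cannot be caused by governance
\mathit{Open}_i \;=\; \pi_0 \;+\; \pi_1 Z_i \;+\; \mathbf{X}_i' \boldsymbol{\pi}_2 \;+\; v_i

% Second stage: regress governance on the instrumented openness
\mathit{Gov}_i \;=\; \beta_0 \;+\; \beta_1 \widehat{\mathit{Open}}_i \;+\; \mathbf{X}_i' \boldsymbol{\gamma} \;+\; u_i
```

Because the instrument $Z_i$ reflects only geographic determinants of trade, $\beta_1$ can then be interpreted as the effect of openness on governance rather than the reverse causation described above.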
Caribbean Economic Integration: The Role of Extra-Regional Trade
Dr. Ransford W. Palmer, Howard University, Washington, DC
This paper examines the feedback effect of extra-regional trade on intra-regional imports of the Caribbean Community (CARICOM). Because of the non-convertibility of CARICOM currencies, intra-regional trade must be settled in hard currency, typically the U.S. dollar. It is argued that the growth of extra-regional trade generates foreign exchange, which stimulates the growth of gross domestic product and of intra-regional imports. Over the past thirty years, there has been an explosion of common market and free trade arrangements around the world, all designed to foster trade and promote economic growth. NAFTA and the European Economic Community are the two dominant ones, but in Africa and Latin America there are numerous others. Theoretically, the benefits from these arrangements seem particularly attractive for groupings of developing countries, but in practice numerous obstacles tend to hinder their full realization. This is particularly the case with CARICOM, a grouping of small Caribbean economies where the benefits tend to be constrained by, among other things, their small size and openness. This paper examines the impact of extra-regional trade on the economic integration effort. After the failed attempt at political union in the English-speaking Caribbean in 1961, the search for economic cooperation led to the creation of the Caribbean Free Trade Association (CARIFTA) in 1969.
In 1973 the Treaty of Chaguaramas replaced CARIFTA with the Caribbean Community and Common Market (CARICOM) and set the following objectives (Article 3 of the Annex to the Treaty): the strengthening, coordination and regulation of economic and trade relations among Member States in order to promote their accelerated harmonious and balanced development; the sustained expansion and continuing integration of economic activities, the benefits of which shall be equitably shared taking into account the need to provide special opportunities for the Less Developed Countries; the achievement of a greater measure of economic independence and effectiveness of its member states, groups of states and entities of whatever description. In the three decades since 1973, efforts to achieve these objectives have been buffeted by major external shocks.
What’s in an Idea? The Impact of Regulation and Rhetoric on the US Food Supply Chain
This paper seeks to explore the relationship between government and business through an examination of regulation pertaining to the US agri-food sector. It will be argued that regulation can act as a power resource, determining who appropriates value in the supply chain. However, political intervention in the market creates differentially advantageous positions for some to the detriment of others and, as such, the political allocation of rents is a dynamic process in which firms compete to control this allocation. Thus, a further argument of this paper is that other power resources are available to firms, which can be used as countervailing sources of power to undermine and overturn regulation. In particular, the paper will focus on the role of ideas as ‘weapons’, which can be used by firms as resources to overturn unfavourable regulation. This paper will argue that the policy changes brought about under the 1996 Farm Bill (which replaced the New Deal-era target price/deficiency payment structure for feedgrains, wheat, cotton and rice with ‘production flexibility contract’ payments, thus decoupling the payments from either the commodity price or the amount of crop produced) could only be brought about by a corresponding change in the ideas which underpinned agricultural policy. It will be argued that these policy changes, which favoured agribusiness interests at the expense of production agriculture, were the result of a long-term campaign waged by agribusiness to change the terms of debate within which US agricultural policy was framed. Although ‘decoupling’ had been on the agricultural agenda since as early as the 1950s, the paper will argue that more wholesale changes did not occur earlier because: (1) production agriculture acted as a countervailing interest to agribusiness; and (2) the farm fundamentalist ideology had become “locked-in” to the AAA and had become ‘cemented’ institutionally.
However, by the 1980s, the agri-food supply chain had become increasingly integrated, with agribusiness assuming far more influence over policy direction than production agriculture and, as such, could more rigorously work to discredit the farm fundamentalist ideology.
Reform and Corporate Governance in Russia
Dr. Jennifer Foo, Stetson University, Deland, FL.
This paper looks at some issues in enterprise restructuring and reform in Russia. It examines the characteristics of privatization and of Russian corporate governance, or the lack thereof. The issues of corporate governance and enterprise reform are particularly important for transitional economies when confronted with the realities of market discipline and global competition. This paper also looks at the efforts to establish a corporate governance system in Russia. An empirical investigation was performed to compare Russia's transition progress to that of other eastern bloc countries such as Poland and Hungary. An investigation of Russia's enterprise reforms and corporate governance may also stimulate institutional changes in Russia and other former socialist countries. In the past decade, the post-communist countries of Russia and Eastern Europe have carried out transitional reforms. Efforts have been made to privatize state-owned enterprises (SOEs) by transferring ownership to private-sector owners. The initial transition efforts paid off in significant gains in real GDP growth for most of the transition countries, as Table 1 indicates. Russia, however, experienced negative growth rates during, and insignificant growth after, the transition. The past decade has shown that countries like Poland, Hungary and the Czech Republic are weathering the transition relatively well, while Russia and Romania are encountering serious transitional problems. Privatization, in itself, is insufficient to effect a successful transition to a market economy. What is needed is effective privatization complemented by structural reforms in the legal sector to support and enforce the reforms. Privatization has to occur if a post-communist country is to transform its planned, state-owned economy into a market economy. Privatization promotes economic growth when shareowners have an incentive to maximize wealth through firm value.
Successful privatization has to consider reforms along three general dimensions: an effective corporate governance system, policies that support business enterprise, and a legal system that protects stockholder rights. The initial phase of privatization is not expected to be optimal, as evidenced by the negative real GDP growth of most of the transition countries. Poland, Hungary, and the Czech and Slovak Republics have experienced consistently positive growth in real GDP in the second half of the decade since the transition process began. Russia and Romania, however, have made the least progress. In Bulgaria and Romania, where the transition governments are weak, and in Russia, where there is greater political instability, the privatization programs opened up opportunities for managers to strip enterprise assets and maximize personal cash flows.
Information Communication Technology in New Zealand SMEs
The New Zealand Government has shown a concern to promote the use of information communication technology (ICT) by New Zealand small to medium-sized enterprises (SMEs). There has been an enquiry into telecommunications regulation and an ongoing commitment to an E-summit programme, which involves both Government and enterprise in ongoing dialogue and public fora. In the May 2002 Budget, for the fiscal year beginning 1 July 2002, the Government introduced a new regional broadband initiative to assist where private telecommunications companies find it unprofitable to upgrade the infrastructure. This study investigates the perceptions of SMEs, as solicited through a quarterly SME survey conducted for the Independent Business Foundation. The survey is now in its third year and provides the opportunity to monitor changing sentiment and to address new issues as and when they arise. This paper analyses the perceptions of various groups integrally involved with the SME sector regarding ICT. The Economist Intelligence Unit/Pyramid Research (EIU) study (2001) (www.ebusinessforum.com) of levels of E-preparedness ranked New Zealand 20th, down from 16th the year before. While the impact of ICT across the whole business sector is important, it is essential that the SME sector, including micro businesses, should capture some of the efficiency gains. Government has continued to push ICT, but there has been increasing disquiet that business is not moving quickly enough to catch the knowledge wave. Science Minister Hon Peter Hodgson, addressing a pharmaceutical conference in March 2002, observed: “I have watched us miss the ICT bandwagon, if I can be blunt. 
And it’s not going to happen again.” (New Zealand Herald, p. E3). Trade NZ, a government department, notes the importance of unleashing the potential gains from ICT for SMEs in underpinning its recent programme of assistance: New Zealand has no option but to adopt e-business and increase the participation of its SMEs in the global economy. E-business has the potential to expand the country’s current exports and grow the number of new exporters. Since the uptake of true e-commerce is slow among exporters and other companies, the New Zealand Trade Development Board (Trade New Zealand) has taken on a leadership role through a NZ$10 million project supported by additional funding from the Government (Trade NZ, 2001). In the absence of a commercial imperative or a large stick/carrot regime, it may be relatively easy to succumb to complacency in times of reasonable economic growth. Currently, agricultural exports are doing relatively well given the higher international prices for commodities.
The Internationalisation Process of UK Biopharmaceutical SMEs
Dr. Cãlin Gurãu, School of Management, Heriot-Watt University, Riccarton, Edinburgh, Scotland
The classical models and theories of internationalisation have considered export and out-licensing activities to be the main modes of entry into international markets. The structural changes in the global economy, the emergence of high-technology industries and the increased involvement of SMEs in international activities are challenging these theories. The development cycle of new products and technology has become long, complex and extremely costly. The lack of specialised resources in the domestic market has forced high-technology SMEs to initiate their internationalisation early in order to access essential complementary resources on a global basis. This paper investigates the internationalisation model and the entry modes of UK biopharmaceutical SMEs. Gurău and Ranchhod (1999) have shown that biotechnology is an industrial sector in which internationalisation is likely to occur, because: the sources fuelling the biotechnology industry are international (i.e. finance, knowledge, legal advice, etc.) (Acharya, 1998 and 1999; Russel, 1988); the marketing of biotechnology products and services is international (Acharya, 1999; Daly, 1985); the competition in the biotechnology sector is international (Acharya, 1999; Russel, 1988); and the international community closely scrutinizes scientific and industrial developments in biotechnology (Acharya, 1999; Bauer, 1995; Nelkin, 1995; Russel, 1988). The large pharmaceutical and chemical corporations which began to diversify into biotechnology from the early eighties had the managerial expertise and the financial resources to develop this activity on a global basis (Daly, 1985). They used their existing networks of international assets to solve the problems related to the novel technologies and emerging markets and to defend their dominant position within the industrial markets (Chataway and Tait, 1993; United Nations, 1988). 
On the other hand, small and medium-sized biotechnology enterprises (SMBEs) are confronted with important problems in their internationalisation process: limited financial resources, the management and processing of huge amounts of information, restrictive regulations, unfamiliar market environments, etc. These represent significant barriers to entry into foreign markets (Acs et al., 1997; Chataway and Tait, 1993; Daly, 1985; OECD, 1997). In spite of these problems, global competition and the structural limitations of their domestic market compel them to become international (Acs and Preston, 1997; Fontes and Coombs, 1997; Daly, 1985).
Impact of Company Market Orientation and FDA Regulations on Bio-Tech Product Development
Dr. L. William Murray, University of San Francisco, CA
Dr. Alev M. Efendioglu, University of San Francisco, CA
Dr. Zhan Li, University of San Francisco, CA
Paul Chabot, Xis, Inc., San Francisco, CA
New products produced by bio-technology firms – products designed to treat, or cure, human illnesses – require large investments ($150 million +) and take a long time (10-12 years) from idea generation through product launch. These products require full authorization by the U.S. Food and Drug Administration (FDA) before the developing firms are permitted to sell them for use by patients. Little is known about the management processes by which these products are developed. Even less is known about the impact of FDA regulation on the manner in which these products are developed, produced, and distributed. The purpose of this paper is to report the results of a recent survey of professionals employed by bio-tech firms to develop new products. The FDA must approve all new pharmaceutical and medical device products designed for use by individuals. A firm interested in developing a new pharmaceutical product must file an application with the FDA, state the goal and define the approach towards discovering possible new products, and provide the FDA with a detailed statement as to how the development process will be managed. If approved, the firm can take the first steps towards developing the product, each of which must be recorded, analyzed, and summarized in performance review reports to the FDA. Three earlier studies researched the possible impacts of FDA regulations on the development and marketing of new products. An earlier study of the development and production process of diagnostic-imaging equipment suggested that for this type of medical device FDA regulation had little effect. A later study by Rochford and Rudelius (1997) suggested that there are regulatory influences and impacts on product development if one examines the number of development activities (i.e., stage gates) that the firm performed in developing a new product. 
A more recent third study of medical device producers by Murray and Knappenberger (1998) further elaborated the relationships between product regulation, the manner in which the product was developed, and the market success of the new product. It concluded that the act of regulation increased new products’ “time to market”; i.e., the amount of time it took the firm to go from idea generation through final product launch.
Country-of-Origin Effects on E-Commerce
Dr. Francis M. Ulgado, DuPree, Georgia Institute of Technology, Atlanta, GA
This paper examines Country-of-Origin effects in an e-commerce environment. In addition to Country-of-Brand and Country-of-Manufacture effects, the paper investigates the presence and significance of Country-of-E-commerce-Infrastructure effects. It develops hypotheses regarding such effects amidst varying customer and market environments, such as business vs. consumer buyers, levels of economic development and product type, and proposes a methodological framework to test the hypotheses. Recent years have witnessed a rapid increase in the range of multimedia technologies available internationally. Among them, Internet technology has dramatically changed the shopping environment for individual consumers and businesses throughout the globe. The number of consumers worldwide purchasing through business-to-business as well as business-to-consumer e-commerce media ("e-commerce" hereafter) has been skyrocketing. However, preliminary statistics indicate that the level of growth and development of Internet and e-commerce infrastructure varies across countries and has generally lagged behind the United States. Meanwhile, current research has also indicated the continued prevalence of country-of-origin effects on consumer perception of the products or services they purchase. This study investigates the presence and significance of country-of-origin effects on buyer perception in the e-commerce environment. While country-of-brand and country-of-manufacture dimensions have been investigated in the past, this paper adds country-of-e-commerce-infrastructure effects. These three variables are examined under different business-to-business, business-to-consumer, and level-of-development environments. The size of the worldwide market for e-commerce was about 66 billion dollars in 1999 and is expected to grow to about 1 trillion dollars this year. In the U.S. 
alone, this is expected to reach $33 billion by the end of this year (Nielsen//NetRatings Holiday E-Commerce Index, 1999). While this significant global growth is widely expected and documented, it has also been observed that the rest of the world lags behind the United States. In contrast to the U.S., for example, regions such as Asia, Latin America, and Eastern Europe are behind in the development and growth of e-commerce in terms of infrastructure, buyer acceptance, and use. Moreover, individual countries also exhibit varying degrees of growth and development relative to their neighbors in the same region. Even amongst developed countries such as Canada, Japan, and the Western European nations, the U.S. remains far ahead. It is therefore not surprising that, according to recent studies, U.S. web sites such as Yahoo! or Amazon dominate the international market.
The FASB Should Revisit Stock Options
Dr. Ara Volkan, State University of West Georgia, Carrollton, GA
Accounting for employee stock options has been a source of controversy since Accounting Research Bulletin No. 37 was issued in November 1948. In 1995, after more than 12 years of deliberation, the FASB issued Statement of Financial Accounting Standards No. 123 (FAS 123). The pronouncement encouraged, but did not require, firms to adopt a fair value pricing model to measure and recognize the option value at the grant date and record a portion of this amount as an annual expense over the vesting period of the option. Moreover, FAS 123 did not require the quarterly calculation and disclosure of the option expense. The primary purpose of this paper is to highlight the flaws in FAS 123 and explore alternative methods of accounting and reporting for stock options that address these flaws. In addition, two studies that evaluate the impact these alternatives have on annual and quarterly financial statements are analyzed. Finally, accounting procedures are recommended that will report more reliable and useful information than current rules provide. Given that two Congressional subcommittees intend to propose fair valuation and expensing of stock options when they finish their investigations into the Enron debacle, the content of this paper is both timely and relevant. Accounting for employee stock options has been a source of controversy since Accounting Research Bulletin No. 37 was issued in November 1948. Subsequent pronouncements, Accounting Principles Board Opinion No. 25 (APBO 25), issued in 1972, and Financial Accounting Standards Board (FASB) Interpretation No. 28, issued in 1978, continued the tradition of allowing fixed stock option plans to avoid recording compensation expense as long as the exercise price was equal to or exceeded the market price at the date of grant. In 1995, after more than 12 years of deliberation, the FASB issued Statement of Financial Accounting Standards No. 123 (FAS 123). 
The pronouncement encouraged, but did not require, companies to adopt a fair value pricing model to measure and recognize the option value at the grant date and record a portion of this amount as an annual expense over the vesting period of the option. Firms that chose not to follow the recommendations of FAS 123 could continue to apply the requirements of APBO 25, but had to disclose the pro forma impact of the FAS 123 requirements on their annual earnings and earnings per share (EPS) in the footnotes of their annual reports. However, FAS 123 did not require the quarterly calculation and disclosure of the option expense.
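The fair-value approach that FAS 123 encouraged can be illustrated with a short sketch. The Black-Scholes formula below is one of the option-pricing models commonly used for this purpose, and the parameter values in the usage note are purely hypothetical; actual FAS 123 measurements also adjust for expected dividends, forfeitures, and expected rather than contractual term.

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(s, k, t, r, sigma):
    """Grant-date fair value of a European call option: one model FAS 123 permits.

    s: current stock price, k: exercise price, t: years to expiration,
    r: risk-free rate, sigma: annualized volatility.
    """
    d1 = (log(s / k) + (r + 0.5 * sigma ** 2) * t) / (sigma * sqrt(t))
    d2 = d1 - sigma * sqrt(t)
    return s * norm_cdf(d1) - k * exp(-r * t) * norm_cdf(d2)

def annual_option_expense(fair_value_per_option, options_granted, vesting_years):
    """Straight-line recognition of total grant-date value over the vesting period."""
    return fair_value_per_option * options_granted / vesting_years
```

For an at-the-money grant with hypothetical inputs (stock and strike at $50, four-year term, 5% risk-free rate, 30% volatility), the model values each option at roughly $16, so a grant of 100,000 options vesting over four years would produce an annual expense on the order of $400,000.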
The world of marketing channels is changing. A deepened focus on the customer experience, micro-segmentation, and the use of technology is leading to two key developments. First, an increasing number of companies are moving toward using flexible channel systems. Microsoft’s bCentral is an example of such a flexible channel system. The second development in the world of marketing channels is that companies are reaching customers using multiple media. These media afford marketers the opportunity to reach customers in the way they would like to be reached, and to deliver an ever more customized buying experience to them. Avon is an example of a company that has moved from using just one way to reach customers to multiple ways in the span of a few years. These two developments create a host of new challenges for marketers. They must decide whether to use flexible channel systems or vertically integrated distributors/retailers. What criteria should be used to make these choices? And if a flexible channel system is used, what organizational changes should be made in order to work effectively with channel partners? What new skills and resources does a marketer need to work effectively with members of a flexible channel system versus vertically integrated distributors? The spread in the use of new media raises the difficult issue of how a marketer can integrate all of the various media to deliver an experience customers can actually enjoy. What does channel integration mean anyway, and how should this integration be realized?
Institutional and Resource Dependency Effects on Human Resource Training and Development Activity Levels of Corporations in Malaysia
Dr. Zubaidah Zainal Abidin, Universiti Teknologi Mara, Shah Alam, Malaysia
Dr. Dennis W. Taylor, University of South Australia, Adelaide, Australia
This study considers managerial motives and orientations affecting decisions about levels of employee training and development (T&D) activities. Specifically, arguments drawn from institutional theory and resource-dependency theory are used to articulate variables that seek to uncover these managerial motives and orientations. Using listed companies in Malaysia, a field survey was conducted amongst two groups of managers deemed to have influence on the determination of annual T&D budgets and output targets, namely, human resource (HR) managers and finance managers. The results reveal that T&D activity levels are affected by institutional-theory-driven components of management's dominant logic and by perceived organizational resource dependencies on employees versus shareholders. But there are contrasts in the significance of these variables as perceived by HR managers compared to finance managers. In Malaysia, there is a relatively high level of corporate investment in human resources (mainly training and development expenditure). The federal government’s Human Resource Development Fund (HRDF) was established in 1993. Its purpose has been to encourage and help fund human resource investment activities by companies. Through reimbursements of eligible T&D expenditures, the HRDF scheme in Malaysia provides corporate managements with a strong incentive to allocate budget expenditure to T&D programs and to report on the numbers of employees trained and developed. But Malaysian companies have not been consistent in taking advantage of this government scheme. This is evidenced by variability in the ratio of levies collected to claims paid by the HRDF on a company-by-company basis, suggesting that corporate managements treat T&D activity levels as quite discretionary in their planning and annual budgeting. What factors influence management’s choice of the annual T&D activity level? 
This study will focus on whether the level of T&D activity is determined by variables embedded in institutional and resource-dependency theories. The motivation for addressing this research question is that insights can be provided about management behaviour in an operating functional area of the company (i.e., investment in human resources) that has economic or human consequences of relevance to employees, shareholders and government oversight bodies. To employees, T&D programs provide the means of maintaining their own competitiveness within their employer organization by improving knowledge, skills and abilities, especially if their current workplace environment is dynamic and complex (Lane and Robinson, 1995). To shareholders, T&D expenditure is seen as reducible in times of economic stringency in order to meet short-term profit targets, but the importance of knowledge and intellectual capital is also recognized as critical in business success (Pfeffer and Veiga, 1999).
Back-Testing of the Model of Risk Management on Interest Rates Required by the Brazilian Central Bank
Dr. Herbert Kimura, Universidade Presbiteriana Mackenzie and Fundação Getulio Vargas, São Paulo, Brazil
Dr. Luiz Carlos Jacob Perera, Universidade Presbiteriana Mackenzie and Faculdade de Ciências Econômicas, Administrativas e Contábeis de Franca FACEF, São Paulo, Brazil
Dr. Alberto Sanyuan Suen, Fundação Getulio Vargas, São Paulo, Brazil
The model proposed by the Brazilian Central Bank for interest rate positions represents the regulator's first attempt to define a quantitative methodology for assessing the market risk of portfolios. Since the model allows discretion in establishing different criteria for the interpolation and extrapolation of interest rates, banks may be able to reduce their capital requirements simply by using different methods of defining the term structure. This study verifies the impact of such methods on the assessment of interest rate risk, especially in the highly volatile Brazilian market. In addition, we discuss, through simulations, whether the model defined by the regulator can influence the willingness of financial institutions to assume more credit risk by lending to counterparties with poor credit ratings and by making more long-term loans. Following guidelines suggested by the Basle Committee, the Brazilian Central Bank has issued rules for capital requirements as a function of assumed market risk. Brazil initiated efforts to set specific regulation of market risk with the issuance of legislation on the risk evaluation of positions exposed to fixed interest rate fluctuations, based on the parametric variance-covariance model. Bearing in mind the complexity of the risk factors in the Brazilian economy, which is clearly subject to major fluctuations in market parameters, it is important for Brazilian financial institutions to implement tools to evaluate risks, allowing better estimation of potential losses. To illustrate the Brazilian economic scene: despite the relative success of the stabilization plan implemented in 1994, which sought to reduce inflation that had reached more than 80% in March of 1990, the interest rate is still one of the highest in the world (around 20% per year), having reached 47% per year during the 1997 Asian crisis. 
Moreover, in 1999 the Brazilian currency was devalued by almost 50% in only one month, due to investors’ crisis of confidence in the conduct of economic policy. In such a context of great volatility, most of the Brazilian banking sector has implemented several methodologies to measure risk, both through value-at-risk measures and through projections in stress tests. Owing to the specificities of the Brazilian economy, market practice has been more demanding in some requirements than international regulation itself. For instance, while the Basle Committee requires quarterly updates of the variance-covariance matrix, the Brazilian Central Bank determines daily risk parameters for the prefixed rates.
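The parametric variance-covariance approach the regulation adopts can be sketched in miniature. The code below is a simplified illustration, not the Central Bank's actual formula: exposures are assumed to be already mapped to standard term-structure vertices, and the quantile multiplier, holding period, and volatility inputs are hypothetical placeholders.

```python
from math import sqrt

def parametric_var(exposures, vols, corr, z=2.33, horizon_days=10):
    """Variance-covariance VaR for positions mapped to term-structure vertices.

    exposures    -- present value allocated to each vertex
    vols         -- one-day volatility of each vertex's PV change
    corr         -- correlation matrix between vertices
    z            -- quantile multiplier (2.33 ~ 99% one-tailed normal)
    horizon_days -- holding period, scaled by the square root of time
    """
    n = len(exposures)
    one_day_variance = 0.0
    for i in range(n):
        for j in range(n):
            one_day_variance += (exposures[i] * vols[i]) \
                              * (exposures[j] * vols[j]) * corr[i][j]
    return z * sqrt(one_day_variance) * sqrt(horizon_days)
```

Because the result depends on which vertices the cash flows are mapped to, two banks holding the same portfolio but interpolating the term structure differently can report different capital requirements, which is exactly the discretion this abstract questions.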
Analysis of Dynamic Interactive Diffusion Processes of the Internet and New Generation Cellular Phones
Dr. Kaz Takada, Baruch College/ CUNY, New York, NY
Dr. Fiona Chan-Sussan, Baruch College/ CUNY, New York, NY
Dr. Takaho Ueda, Gakushuin University, Tokyo, Japan
Dr. Kaichi Saito, Nihon University, Tokyo, Japan
Dr. Yu-Min Chen, J. C. Penney, Dallas, TX
NTT DoCoMo has experienced unprecedented success with its i-mode cellular phone services in Japan. In this study, we analyze the diffusion of the i-mode and other second-generation (2-G) cellular phones, and their dynamic interactive effect on Internet diffusion is modeled and empirically tested with diffusion data. The empirical results clearly support the hypothesized relationship between the two, indicating that in the short term the rapid diffusion of 2-G phones has a negative effect on the diffusion of the Internet. However, we contend that in the long run the diffusion of these technologies should exert positive and complementary effects on each other. The introduction of NTT DoCoMo's i-mode cellular phone services in 1999 made Japan the number one mobile commerce (m-commerce) nation by 2001. The success of the i-mode service is such a phenomenon that every major newspaper and magazine has had at least one article written about it in the last twenty-four months (Barrons 2000; Business Week 2000; Fortune 2000, among others). How does the i-mode phenomenon affect traditional Internet diffusion through the use of personal computers (PCs), and how does it affect the future of Internet diffusion? The i-mode represents a new generation of cellular phone, capable of performing various functions beyond those of traditional voice-based cellular telephones. According to NTT DoCoMo, with the i-mode phone people can access online services including balance checking and fund transfers from bank accounts and retrieval of restaurant and town information. In addition to conventional voice communications, users can access a wide range of sites by simply pressing the i-mode key. The service lineup includes entertainment, mobile banking and ticket reservations. 
The i-mode employs packet data transmission (9600 bps), so communications fees are charged by the amount of data transmitted and received rather than the amount of time online. The i-mode is compatible with Internet e-mail and can transfer mail between i-mode terminals. Packet transmission allows sending and receiving of e-mail at low cost. The i-mode, although dominant in the market, is not the only such service; other providers offer cellular phone services with comparable features and capabilities. In this study, we analyze the effect of the introduction of these new second-generation (2-G) cellular phones in Japan. Specifically, our research question is posited as follows: does the explosive growth of second-generation cellular phones stimulate the adoption of Internet access among Japanese households, or suppress it? Diffusion research in marketing has a rich literature.
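The kind of interaction the study tests can be sketched with a coupled Bass-style diffusion model. This is a generic illustration of the modeling idea, not the authors' specification: the discrete-time form, the linear cross-penetration term, and all parameter values are assumptions for the sketch.

```python
def bass_step(n, m, p, q, cross=0.0):
    """One period of discrete Bass diffusion with an added cross-product term.

    n, m  -- cumulative adopters and market potential
    p, q  -- innovation and imitation coefficients
    cross -- effect of the other technology's penetration:
             positive stimulates adoption, negative suppresses it
    """
    new_adopters = (p + q * n / m + cross) * (m - n)
    return min(m, n + max(0.0, new_adopters))

def simulate(periods, m1, m2, p, q, c12, c21):
    """Couple two diffusions: each period, each technology's cross term
    depends on the other technology's current penetration."""
    n1 = n2 = 0.0
    for _ in range(periods):
        n1, n2 = (bass_step(n1, m1, p, q, c12 * n2 / m2),
                  bass_step(n2, m2, p, q, c21 * n1 / m1))
    return n1, n2
```

Setting the cross coefficient from phones to the Internet negative reproduces the short-run pattern the abstract reports: the faster 2-G phones diffuse, the slower PC-based Internet adoption grows in each period.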
Franchise Development Program: Progress, Opportunities and Challenges in the Development of Bumiputera Entrepreneurs in Malaysia
Issues related to the involvement of Bumiputeras in the development of the country, mainly in the business sector, have received attention from the ruling government since the country’s independence. Before independence, British colonial policy left Bumiputeras far behind other races in many respects. To address this problem, the government launched the New Economic Policy (NEP), which focused on eliminating poverty and restructuring the multiracial society of Malaysia. The era of the NEP was succeeded by the National Development Policy (NDP), which aims to continue where the NEP left off. Under the NDP, the government designed programs to increase the number of Bumiputeras in the trading sector through the Bumiputera Community Trade and Industry Plan (BCTIP). In parallel with the strategy outlined in the resolution of the Third Bumiputera Economic Congress held in 1992, this paper attempts to evaluate and analyze the achievements and opportunities in the franchise development program, a vital mechanism encouraging Bumiputera involvement in and contribution to the nation’s economy. This paper also examines the main challenges faced by Bumiputera entrepreneurs in the franchise development program. Issues relating to Bumiputera involvement and national development began to gain the ruling government’s attention once independence was achieved. British policies before independence clearly left the Bumiputeras behind other races in many areas. Realizing that national unity could only be achieved if the riches of the nation were shared equally among all races, the Bumiputera economic development agenda was given attention in the nation’s economic development plans. The government’s involvement in this area began under the first Prime Minister, Tengku Abdul Rahman, and has continued until today. 
Realizing that an unequal pattern of wealth distribution would affect national unity, as in the May 13 Tragedy of 1969, the government designed the New Economic Policy (NEP) (1970-1990). This plan aimed to eliminate poverty and restructure the community in Malaysia. Although the NEP did not state the exact number of entrepreneurs to be produced, the public commitment to 30% national equity ownership was a step taken by the government to encourage active Bumiputera involvement in the trade and industry sector. Unfortunately, at the end of the NEP in 1990, the Bumiputera had only managed to accumulate 20.1% of the nation’s wealth.
Developing a Computer Networking Degree:
Bridging the Gap Between Technology and Business Schools
Dr. Karen Coale Tracey, Central Connecticut State University, New Britain, Connecticut
The idea of integrating curriculum and collaboration between academic disciplines is not a new concept in higher education. Interdisciplinary learning, teaching, and curriculum came to the forefront as part of the progressive educational movement of the early twentieth century. Multidisciplinary and interdisciplinary programs can foster, accelerate, and sustain constructive change in academia and student learning (Ellis & Fouts, 2001). The purpose of this paper is to describe the proposal for the Bachelor of Science in Computer Networking Technology degree at Central Connecticut State University (CCSU). CCSU is a regional public university that serves primarily residents of central Connecticut. It is one of four regional public universities offering higher education in the state. CCSU’s location in the center of the state means that the entire population of the state is within 75 miles of its campus in New Britain. Connecticut is one of the smallest states in land area. Its land area of 4,845 square miles makes it the third smallest state in terms of area (World Almanac, 2002). The greatest east-west distance in the state is approximately one hundred miles; the greatest north-south distance is approximately seventy-five miles. Connecticut’s population of approximately three million makes it the twenty-first smallest state in terms of population (U.S. Bureau of Census, 2000). Its population growth during the last decade (1991-2000) was 3.6 percent, noticeably less than the 13.1 percent growth in the U.S. as a whole. CCSU is located approximately 2-3 hours from Boston and New York City. CCSU is divided into five academic schools: Arts/Sciences, Business, Professional Studies, Technology, and Graduate. CCSU enrolls approximately 12,000 students. About two thousand of these students are enrolled in the Business School and 900 in the School of Technology. Most CCSU students (about three quarters) are undergraduates (CCSU, 2002). 
Ninety-five percent are Connecticut residents. Twenty-two percent live on campus. Sixty-eight percent of the full-time students receive need-based financial aid (Morano, 2002). There is no agreement on the meaning of multidisciplinary and interdisciplinary programs, but Beggs (1999) provides a guide. He describes a discipline as a body of knowledge or branch of learning characterized by an accepted content and learning. Research, problem solving, or training that mingles disciplines but maintains their distinctiveness is multidisciplinary. Practically speaking, faculty from at least two disciplines who work together to create a learning environment and incorporate theory and concepts from their respective academic disciplines can be categorized as interdisciplinary. The creation of an international field course is one platform for students from different disciplines to interact.
It is now well established by academic scholars that property rights are a necessary requirement for the functioning of a market-based economy (Alchian & Demsetz, 1973; Drahos, 1996). Over the last two centuries or so, this principle has been extended to Intellectual Property Rights (IPR), which include patents, copyrights, trademarks, brands, etc. (Abbott et al., 1999). That importance is shown by the monetary and competitive gains generated by brand equity. However, the definition and protection of intellectual property rights is also one of the most complex subjects of international negotiations, because its acceptance has not always been universal (May, 2000). Criticisms of the extension of IPR include, among other things, its impact on free trade and competition (Maskus, 2000; Maskus & Lahoual, 2000). This paper argues that recent court cases and agreements like TRIPS (Gervais, 1998) may lead to the erosion of the fundamentals of property rights per se and, by implication, of the attributes of the market. Specifically, it addresses the issues of competition and the rights of buyers and consumers. The focus of the paper is trademarks and brands; in particular, it addresses the issue of gray marketing and its implications for global marketing management (Clarke & Owens, 2000) and for innovation in science-based products like pharmaceuticals (Rozek & Rapp, 1992; Grubb, 1999). Finally, the paper argues that it is far better for companies to use marketing tools, rather than the courts, to protect their brand and trademark equity.
Copyright 2000-2020. AABJ. All Rights Reserved