The Journal of American Academy of Business, Cambridge
Vol. 2 * Num. 1 * March 2002
The Library of Congress, Washington, DC * ISSN: 1540-7780
Most Trusted. Most Cited. Most Read.
All submissions are subject to a double blind peer review process.
The primary goal of the journal is to give business-related academicians and professionals from various fields, in a global realm, one source in which to publish their papers. The Journal of American Academy of Business, Cambridge brings together academicians and professionals from all business-related fields to interact with members inside and outside their own particular disciplines. The journal provides opportunities for researchers to publish their papers as well as to view the work of others. The Journal of American Academy of Business, Cambridge is a refereed academic journal that publishes scientific research findings in its field under ISSN 1540-7780, issued by the Library of Congress, Washington, DC. The journal meets the quality and integrity requirements of applicable accreditation agencies (AACSB, regional) and journal evaluation organizations, to ensure that our publications provide authors with venues recognized by their institutions for academic advancement and academically qualified status. No manuscript will be accepted without the required format, and all manuscripts should be professionally proofread before submission. Professional proofreading and editing services are available at www.editavenue.com.
The Journal of American Academy of Business, Cambridge is published twice a year, in March and September. E-mail: email@example.com; Journal: JAABC. Requests for subscriptions, back issues, and changes of address, as well as advertising inquiries, can be made via the e-mail address above. Manuscripts and other materials of an editorial nature should also be directed to the Journal's e-mail address above. Address advertising inquiries to the Advertising Manager.
Copyright 2000-2017. All Rights Reserved
ISO9000: 2000 Quality Management Systems Standards: TQM Focus in the New Revision
Dr. C. P. Kartha, University of Michigan-Flint, MI
The International Standards Organization in 1987 developed a set of quality standards, the ISO9000 series, as a model for quality assurance standards in design, development, production, installation and service. The purpose behind the deployment of these standards was to simplify the global exchange of goods and services by developing a common set of quality standards. They provide a universal framework for quality assurance and quality management. The standards were revised in 1994; the revised version is referred to as ISO9000: 1994, and a vast majority of organizations currently use it. The most recent revision of these standards, ISO9000: 2000, was published in December 2000. The revision adopts a systems approach to quality management and includes TQM principles and procedures. This paper examines the major changes and improvements in the latest revision in relation to ISO9000: 1994 and some of the issues involved in the implementation of the new standards. Global competition and customer demand for better quality in recent years resulted in an emerging need for countries to develop guidelines and standards for identifying and addressing quality issues. The International Standards Organization in 1987 developed a set of quality standards known as ISO9000, subsequently updated and revised in 1994, as a model for quality assurance and quality management for organizations involved in design, development, production, installation and service. The purpose behind the deployment of ISO9000 was to simplify the international exchange of goods and services by requiring a common set of quality standards. The European Community nations adopted ISO9000 as the model for international standards for quality and made registration to these standards mandatory for doing business with other nations. In the meantime, the Big Three automobile manufacturers in the U.S.
developed QS9000, an extended version of the ISO9000 standards, and required that all their suppliers obtain registration to these standards in order to qualify for new contracts as well as to renew existing contracts. These developments generated immense interest among companies all over the world in actively implementing the various requirements of the standards to obtain certification. For most organizations, working towards certification was no longer an option but a necessity for survival in a competitive global market. The most recent revision of these standards, ISO9000: 2000, was published in December 2000. The revision was a response to widespread criticisms of several key aspects of the old standards. The new standards have a completely new structure based on the principles of Total Quality Management. They define the requirements for quality management systems that address an organization’s capability to provide products that meet customer and applicable regulatory requirements. The new standards provide a process-oriented structure and a more logical sequence of contents. They retain the essence of the 20 elements in the current standards, with the requirements consolidated into four main sections: Management Responsibility; Resource Management; Product and/or Service Realization; and Measurement, Analysis and Improvement. This paper evaluates the added requirements specified in the revision ISO9000: 2000. The elements of the newest revision of the standards are compared with the older standards, and the significance as well as the effectiveness of the improvements in the new revision are discussed. ISO 9000 is a series of internationally accepted guidelines on how companies should set up quality assurance systems.
Focusing on procedures, controls, and documentation, the standards are designed to help a company identify mistakes, streamline its operations, and guarantee a consistent level of quality. The standards prescribe documentation for all processes affecting quality. They are intended to be used in contractual situations between a customer and a supplier. ISO 9000:1994 consists of five documents, ISO 9000 – 9004, and provides a series of three international standards – ISO 9001, 9002, and 9003 – dealing with quality systems that can be used for external quality assurance purposes. ISO 9000 provides guidelines for selecting and using the appropriate standard from the three. ISO 9004 is for internal use by organizations to develop their own quality systems. ISO 9001 provides a model for quality assurance in Design/Development, Production, Installation, and Servicing; ISO 9002 covers only Production and Installation; and ISO 9003 deals with Final Inspection and Test. Companies currently registered conform to the requirements of one of these standards. The ISO 9000 standards are used to ensure a supplier’s conformance to specified requirements. The quality system requirements specified in these standards are considered complementary to technical product and service requirements. ISO 9001 is more detailed and includes requirements for procedures for management responsibility, for contract review, and for controlling and verifying product design. In addition, requirements for document control, purchasing, process control, inspection and testing, quality records, quality audits, training and servicing are discussed. The document contains procedures for a total of 20 elements. The standards require documenting conformance of quality systems to the company’s quality manual and established quality system requirements.
The registration process involves a document review, a pre-assessment to identify potential noncompliance, and a final assessment leading to registration. Periodic re-audits are required. Feedback from the assessment includes a record of nonconformance with specific requirements of the standards. The specific quality system requirements of the standards are listed in Table 1.
Anthropomorphic and Ecological Views of E-Business
Dr. Thang N. Nguyen, Candle Corporation and California State University Long Beach, CA
From an anthropomorphic view, we consider e-business over the Internet as a giant living human body, hence a living species. From an ecological view, we extend the concept of the business ecosystem by Moore (1993, 1996) to include software, in particular, as living species within business ecosystems. As depicted in Figure 1, we establish a parallelism between “e-business,” driven by software (as living species) and non-software factors (e.g. funding – not shown in Figure 1), and “the natural ecology,” conditioned by multiple living species and abiotic factors (such as temperature – not shown in Figure 1). Together, the two views give rise to the concept of e-business as an automation continuum that ranges from bits (microscopic) to digital species to business ecosystems (macroscopic). We argue that this parallelism helps define a framework for the investigation of business-IT integration, structurally (anatomically), functionally (physiologically) and behaviorally. The general parallelism in Figure 1 suggests a biologically/ecologically-inspired framework for exploiting the bio-ecological organization, interaction and behavior of digital species/digital organisms (denoting software and software products, respectively) and business ecosystems. In this paper, we restrict our attention to such a framework for the investigation of business-IT integration. At the microscopic level of this parallelism, bits may be considered particles, primitive data types atoms, and complex data types molecules. Object classes, in the sense of object-orientation, can then be considered the biological equivalents of cells. Constructor methods in an object class are considered equivalent to DNA/RNA, and basic class methods to organelles in cells. As proteins are composed of about 20 different amino acids, we suggest that amino acids are the equivalents of programming constructs such as the if-statement, for-statement, while-statement, do-while-statement or other constructs built upon them.
We postulate that self-contained algorithms (or general class methods of any class) play a role similar to that of proteins or carbohydrates. Software components (such as COM/DCOM) may then be biologically equivalent to tissues, applications to organs, application systems to organ systems, software products to living organisms, and software to living species. At the macroscopic level, software product families are considered populations, e-business a community, and e-business ecosystems (in the sense of Moore, 1993) natural ecosystems. Thus, just as ions, atoms and molecules are involved in reactions that produce proteins and carbohydrates for living cells to maintain the life of an organism, the zeros and ones of the binary system and different data types make up the software elements and constructs that define life for software classes, components, applications and systems. Just as the cell is the smallest unit of biological life, the software class (in the OO sense) is the smallest unit of software life. Just as biological membranes in or between cells are junctions between biological pieces, programming methods (algorithms) and interfaces can be considered junctions between pieces of software. Assembler inline macros, COBOL copybooks, C and C++ include statements, Java import statements, and C++ and Java class hierarchies and inheritance are examples of implementing the concept of reusability. The reuse of code creates new code, much as the mating of species generates different phenotypes from a given set of genotypes. Each block of code reused in a new class or class method carries a certain characteristic (property) with it, similar to genotypes and phenotypes. Thus, the parallelism suggests that class inheritance and software reusability define the continuation of software life and its evolution in information systems supporting e-business.
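The inheritance-as-heredity analogy above can be sketched in a few lines of code. The example below is illustrative only and not from the paper; the class names are hypothetical, and Python stands in for the Java/C++ mechanisms the authors cite. The subclass inherits the parent's state and methods (its "genotype") and adds a trait of its own (a distinct "phenotype").

```python
# Illustrative sketch: class inheritance as the genotype/phenotype
# analogy described above. All names here are hypothetical.

class Account:                      # parent class: the shared "genotype"
    def __init__(self, owner, balance=0.0):
        self.owner = owner
        self.balance = balance

    def deposit(self, amount):      # a reusable "trait" (class method)
        self.balance += amount
        return self.balance

class SavingsAccount(Account):      # subclass: inherited traits plus a new one
    def __init__(self, owner, balance=0.0, rate=0.02):
        super().__init__(owner, balance)   # reuse the parent constructor
        self.rate = rate

    def add_interest(self):         # new behavior built on inherited state
        self.balance += self.balance * self.rate
        return self.balance

acct = SavingsAccount("A. Smith", balance=100.0)
acct.deposit(50.0)      # behavior inherited from the parent, not rewritten
acct.add_interest()     # behavior unique to the subclass "phenotype"
```

The reused `deposit` method carries its characteristic into the new class unchanged, while `add_interest` is the new trait that distinguishes the subclass, which is the sense in which reuse "creates new code" in the passage above.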
While in the ecological continuum (left-hand side of Figure 1) there is a natural linkage between elements at one level and the next, a similar linkage between high-level strategic business thinking and low-level business-IT operations is not obvious in the digital continuum (right-hand side of Figure 1). Section 3 below adds another insight into why the digital linkage does not readily exist despite the previous biologically- and ecologically-inspired research and applications. Applying biological concepts (at the cellular level and below) to software in particular, and to computer science/engineering in general, is not new (Langton, 1995). As a matter of fact, researchers have for decades been looking at the analogy between biology and computer science in research on cellular automata, artificial life and the like (Mitchell, 2000). At the computer program execution level, CPU time has been thought of as an "energy" resource, and memory as a "material" resource representing informational genetic patterns that exploit CPU time for self-replication. Mutation of these patterns produces new forms as digital genotypes. Different genotypes compete for resources such as CPU time and memory space. Concepts such as genomes, parasites, and ecological communities have emerged and been used to study digital life. At the computer program development level, the concept of genetic algorithms has been the focus of research by many computer scientists and has initiated many applications in the computer and information disciplines. These studies have been geared strictly toward low-level applications of biological insights and understanding. Using an ecological approach to understand business and industry behavior has also been researched; examples are financial-market prediction, population genetics and factory scheduling (Farmer, 2000; Mitchell, 1998). The most notable effort was that of James Moore (1993, 1996).
In his award-winning article (Moore, 1993), Moore first looked at business competition in terms of predators and prey. He raised such questions as, for example, “How can a company like IBM create an entirely new business community such as PCs and then lose that market?” In response, he provided a framework for understanding competition and strategy development. According to Moore, businesses evolve as ecosystems, where a business ecosystem is defined as “an economic community supported by a foundation of interacting organizations and individuals – the organisms of the business world”. The organisms can be anything: “a process, a department, a business unit, or an entire company”. Business ecosystems, he said, go through four phases of co-evolution: Pioneering, Expansion, Leadership and Renewal (Moore, 1996). Moore’s business ecosystem model, however, stays at the high level of business strategy development. The two approaches have been pursued separately; there has been no obvious connection or linkage between them, although each approach has achieved excellent results and practical applications. Today’s e-business, when restricted to integration, has two main issues: (1) horizontally, it can generally be described as a collection of isolated, disconnected or fragmented information systems within an enterprise or across enterprises, and (2) vertically, serious mismatches and gaps exist between the business part (e.g. high-level business strategies) and the supporting IT part (e.g. low-level IT operations).
A Look Over the Concepts of Work and Leisure Throughout Important Historical Periods
Dr. Esin Can Mutlu, Yildiz Technical University, Istanbul, Turkey
Ozen Asik, Yildiz Technical University, Istanbul, Turkey
The present study attempts an in-depth investigation of work and of a related concept, leisure. Work and leisure are defined as two counterparts, and their co-existence is examined throughout historical periods. Leisure in Ancient Greece is the ultimate activity, and consists of cultivation of the soul through the arts. Work is seen as a degrading activity until Protestantism rises and praises it as a way to reach God. Especially after industrialization, work gains superiority over leisure. That is, leisure is seen as complementary to work and, just like work, is influenced by aspects of industrialization as well. Current aspects of work and leisure carry the same meanings, though technology now allows more leisure in terms of available time. In this paper, the concepts of work and leisure are defined and reviewed along important periods in history. Starting from hunter-gatherer societies, the concepts are elaborated for agrarian and industrial societies, followed by a conclusion regarding current trends in work and leisure. Work as a general term is defined as an “exertion directed to produce or to accomplish something; labor, toil; productive or operative activity; activity undertaken in return for payment; employment, job.” Some criteria pertaining to the definition of work are cited in an article by Sylvia Shimmin (Hopson & Hayes, 1968). These criteria suggest that: a. work is purposeful activity; b. work is instrumental; c. work yields income; d. work entails expenditure of effort; e. work involves some element of obligation and/or constraint. This last point may also be found in the work of Raymond Firth, who claims that “work is purposive, income-producing activity, entailing the expenditure of energy at some sacrifice of pleasure” (Bryant, 1972).
These definitions suggest that work is usually compulsory, that its performance does not always give pleasure to the performer, and that another activity might be preferred to it. Leisure, on the other hand, is defined as an “opportunity or time afforded by freedom from immediate occupation or duty; free or unoccupied time; ease.” It is also considered “free time after the practical necessities of life have been attended to.” These two definitions stress the time dimension involved in leisure. Another definition states that leisure is the exertion of a preferred activity that provides diversion and pleasure, as opposed to the everyday routine activities usually carried out under a sense of social constraint and obligation. In this definition the activity dimension of leisure is more apparent. Still, it is obvious that when defining leisure, our conception of it is generally bound to work. That is to say, work (in modern society) is considered the primary activity, and leisure is defined accordingly, with a secondary emphasis. Some social scientists have tried to give comprehensive accounts of leisure by combining the various elements involved. Kaplan identifies the essential elements of leisure as (1960; cited in Neulinger, 1981): a. an antithesis to “work” as an economic function, b. a pleasant expectation and recollection, c. a minimum of involuntary social role obligations, d. a psychological perception of freedom, e. a close relation to values of the culture, f. the inclusion of an entire range from inconsequence and insignificance to weightiness and importance, g. often, but not necessarily, an activity characterized by the element of play. Kaplan also claims that leisure is none of these by itself but all of them together, in one emphasis or another. Still another leisure researcher, Dumazedier, points out that “leisure is not a category, but a style of behavior, which may occur in any activity” (1974; cited in Kando, 1980).
This point, and the multitude of definitions described above, direct our attention to the fact that whether an activity counts as leisure is determined by the subjective meaning attached to it. The hunter-gatherer society is the most ancient form of social organization known so far. As all living beings struggle to maintain their existence, human beings grouped in hunter-gatherer societies also aim at surviving through mastery over nature (Kando, 1980). Hunter-gatherer societies usually live in extreme natural conditions where resources are limited and survival is therefore difficult. These conditions are so pressing that every member of the community is led to take part in the provision of a livelihood, usually in accordance with some kind of division of labor on the basis of age and/or sex (Neff; in Bryant, 1972). For example, men are engaged in hunting and fishing, i.e. activities requiring muscle power, and in making war weapons and the various other materials used in these activities. Women, on the other hand, gather natural products, take care of the household and prepare the food. Activities requiring specialized people (e.g. a religious leader or a war leader) are also carried out by men whose primary duty is again hunting or fishing. Since in hunter-gatherer societies work is continual and is demanded of every member of the society, it is considered a natural and invisible part of routine life, indistinguishable as a separate sphere of behavior (Neff; in Bryant, 1972). Even the hunter-gatherers’ languages do not contain a distinctive term for work, which is also considered a sign of how work as a natural activity is incorporated into daily life. As a result, a work-leisure distinction does not exist either (Kando, 1980). It is also claimed that, in addition to the non-existence of a distinction between the two, leisure activities are incorporated into work (Aydogan, 1999).
An important note is that since work is such a natural part of daily life, it does not carry the negative feelings of being unpleasurable or unwanted attached to it in the modern world. Thus, it can be said that work is experienced as leisure, or that work and leisure are the same in this context. Another factor that brings work and leisure closer in hunter-gatherer societies is that work tends to be varied and creative, as people have to perform various related tasks together. Furthermore, work is usually accompanied by rituals, such as dancing, feasting, praying, etc., which are believed to be necessary for the success of the activity and thus become part of the world of work (Kraus, 1978). Several researchers have examined “modern age” tribes who still lead the hunter-gatherer lifestyle in deserts, polar regions or tropical areas, in an attempt to gain insight into the early hunter-gatherer groups. Raymond Firth studied a tribal society in Tikopia (Bryant, 1972). Firth claims that work serves not only an economic function but also a social function. The successful hunter has the right to praise himself through songs, and he is also honored by the community; his song is even used as a dance song. Various ceremonies (wedding, funeral, initiation, etc.) demand feasting, and the male relatives of the family come together to prepare the food. Such work is done not for a material reward but for the reciprocity of the relationship. In short, work’s social side adds an element of play to its performance. Another researcher, Richard Gould, notes that women find enough food for the family quickly and with great ease, even in relatively adverse environmental conditions (Gould, 1970; cited in Cheek Jr. & Burch Jr., 1976). Women can do embroidery and visit the other camps in the time remaining from food gathering and household routines.
Men’s hunting schedule is also irregular, so the remaining time can be devoted to visits and dance (Lee & DeVore, 1968; cited in Cheek Jr. & Burch Jr., 1976). In short, in hunter-gatherer societies work and leisure activities are quite intertwined, not separated by sharp distinctions but, on the contrary, borrowing elements from each other. The agrarian society is a social organization in which people are settled in villages and mostly engaged in agricultural activities, including the breeding of domestic animals. This type of society first appeared in Southwestern Asia, where a relatively favorable climate produced fertile lands on which agricultural populations could develop. Through this social organization the supply of food and raw materials could remain relatively stable and ample, a condition unknown to hunter-gatherer societies (Neff; in Bryant, 1972).
Dr. Lieh-Ching Chang, Shih Hsin University, Taiwan R.O.C
Tacit Knowledge Management in Organizations: A Move Towards Strategic Internal Communications Systems
Dr. Probir Roy, University of Missouri Kansas City, Kansas City, MO
Preeta M. Roy, The Wharton School, University of Pennsylvania, Philadelphia, PA
To date, knowledge management systems have focused on explicit knowledge. In this paper we explore the need to incorporate tacit knowledge into strategic communication systems. For these strategic internal communications systems to become effective, Internet Protocol (IP) technology is a necessary ingredient. IP technology, through the use of Virtual Private Networks and streaming media, will permit organizations to achieve the three key components of tacit-knowledge-based strategic internal communications systems, viz. discovery, dissemination and collaboration. The literature in Knowledge Management (KM) concurs that knowledge within an organization falls into two categories – explicit knowledge and tacit knowledge (Markus, 2001). Explicit knowledge is relatively easy to codify and very external in nature. Thus, most organizations have concentrated their knowledge management efforts on developing effective links between the management of explicit knowledge and external communications systems. Tacit knowledge, on the other hand, is relatively harder to codify and extract, and is very internal in nature. Not only does tacit knowledge need to be discovered, extracted, and captured, it has to be creatively disseminated so that this shared knowledge can be efficiently used to extend the knowledge management base. Tacit knowledge is perhaps the more important component of knowledge management, insofar as the collaboration it encourages leads to quantum shifts in knowledge rather than the incremental linear enhancements typically associated with explicit knowledge management. Prima facie, it appears that tacit knowledge extraction, dissemination, and collaboration would be difficult to effect.
However, with the tremendous developments in communications technologies, especially Internet Protocol (IP) technologies, there is a technological push that is leading to rapid advances in strategic internal communications systems that harness tacit knowledge. Strategic internal communications systems are intended for use within an organization, with a very specific target audience, usually employees. These systems carry very specific messages but allow for wide-ranging and multi-faceted forms of collaboration. These internal systems primarily use IP-based broadband technology, which allows organizations to free their knowledge management systems from constraints of time, space, culture, and location, while still retaining the feeling of “engagement” that is truly essential for fostering and nurturing the key collaboration component of tacit knowledge management. In this paper, we attempt to assess the ability to harness tacit knowledge with Internet technology. We elaborate on the move towards strategic internal communications systems and the technology driving this movement. This paper also discusses how these systems will provide an efficient way for organizations to extract, assimilate, and disseminate tacit knowledge, thus further enhancing their knowledge management systems. Knowledge Management (KM) is the formal process of determining what internally held information could be used to benefit a company and ensuring that this information is easily made available to those who need it. Companies have attempted to grasp the concept of KM and apply it to organizational functions. Primarily, companies have strongly developed the link between KM systems and external communications. This link between KM and external communications is two-fold.
On the one hand, external communications serve to provide the company’s public (its clients, its partners, the media and government) with background information; promote goodwill towards the company; and raise awareness of the company's direction, products and/or services. On the other hand, the company acquires knowledge about its public (i.e. market trends, costs of distribution, early warning about receivables and inventory demand). Knowledge management through external communications has been executed using direct marketing, electronic marketing, and customer relationship management (CRM) applications. An enormous amount of attention has been given to external communications. Choosing the actual message, deciding how best to address the particular audience and creating an aesthetic product are considered crucial tasks that affect the final outcome of a piece of external communication. Disseminating knowledge to the public and capturing knowledge from the public are capabilities at which companies are either expert or which they acquire through outside firms. However, another type of communications is as important to KM in a company, if not more so: internal communications. Internal communication is intended for use within a company. The target audience is very specific, usually employees, and the messages are very specific as well, such as introducing a new company strategy or announcing new policies. One of the most important uses is training (product, procedure or management training). Training has become so important to a company's vitality that most successful companies function as virtual universities, using internal resources to disseminate knowledge to those who request the training. As with external communications, internal communications also have a two-fold link with KM.
Efficient internal communications can be used either to remove barriers that prevent knowledge sharing or to capture the tacit knowledge embodied in employees.
Dividend Omission Announcement Effects, the Firm’s Information Environment, Earnings Volatility and Q
Dr. Devashis Mitra, University of New Brunswick, Canada
Dr. Muhammad Rashid, University of New Brunswick, Canada
For a sample of dividend-omitting firms’ stocks, the average returns variance increases significantly on day -3 (where the Wall Street Journal Index announcement date is day 0) relative to a prior estimation period, price spreads show significantly increased levels on days -1 and 0, and average Cumulative Abnormal Returns (CAR) are consistently negative between days -4 and 0 and significantly negative on day -1. These results suggest some anticipatory market uncertainty during the period immediately before the dividend omission announcement. The percentage increase in average returns variance for days -3 and -2 relative to the estimation period average is inversely associated with the firm’s approximate q measure and the dividend change yield from the previous quarter, and positively associated with the percentage of institutional equity holding in the firm as well as with firm-specific earnings volatility, measured by the standard deviation of earnings per share. On average, the returns variance, percentage spread and earnings volatility increase over a 365-day post-announcement period relative to 90-day pre-announcement levels. These results suggest heightened uncertainty in the aftermath of dividend omission. Also, on average, the number of institutions holding the firm's equity declines substantially one year after the announcement. This suggests less monitoring of these firms and enhanced informational asymmetry in the aftermath of the dividend omission announcement. The market’s risk perception subsequent to the dividend omission appears to increase more for firms with high historical earnings volatility and lower approximate q values. This study seeks to add to the empirical literature on the “information content” of dividends by examining the effect of first-time dividend omission announcements for a sample of NYSE-listed firms on their stock price characteristics.
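The abnormal-return machinery behind these CAR figures is the standard market-model event study. A minimal sketch follows, assuming ordinary OLS estimation of the market model over a pre-event estimation window; all returns below are invented for illustration and are not the authors' sample.

```python
# Hedged sketch of a market-model event study of the kind summarized
# above. Data are simulated; window lengths are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Estimation-period returns (e.g. 90 trading days before the event window)
market_est = rng.normal(0.0005, 0.01, 90)
stock_est = 0.0002 + 1.1 * market_est + rng.normal(0.0, 0.005, 90)

# Fit the market model R_it = alpha + beta * R_mt + e_it by OLS
beta, alpha = np.polyfit(market_est, stock_est, 1)

# Event-window returns for days -4 .. 0 (day 0 = WSJ Index announcement date)
market_evt = np.array([0.001, -0.002, 0.000, 0.0015, -0.001])
stock_evt = np.array([-0.004, -0.006, -0.003, -0.008, -0.015])

abnormal = stock_evt - (alpha + beta * market_evt)  # AR_it per event day
car = np.cumsum(abnormal)                           # CAR over days -4 .. 0
```

With the simulated stock returns persistently below their market-model prediction, the CAR drifts negative through day 0, which is the pattern the abstract reports for the omission sample.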
We find that the average returns variance and percentage high-low spread of stock prices increase during an announcement or event period relative to an estimation period. Analysis over a seven-day “event period” shows significantly increased returns variance levels on day -3 and significantly increased average daily price spreads on days -1 and 0, relative to the Wall Street Journal Index announcement date 0. The average Cumulative Abnormal Returns (CAR) are consistently negative between days -4 and 0 and significantly negative on day -1. The news of the dividend omission is often announced on day -1 and reported the next day. Some of these announcements may be made during trading hours while others may be transmitted after trading hours. In this study, the higher average returns variance, price spread and negative CAR described above are interpreted to reflect, on average, the market's uncertainty caused by anticipating the omission announcement before its actual release. Indication of an increase in anticipatory uncertainty raises related research questions. First, the study investigates whether the increase in uncertainty bears an association with the firm's information environment, proxied by its percentage of institutional equity holding, the extent of volatility in the firm's earnings and the dividend change yield from the quarter prior to the dividend omission announcement. Bhushan (1989) defines the firm's information environment as the extent of firm-specific information available in the public domain. Firms with higher institutional equity holding would presumably be subject to more monitoring and public scrutiny (see, for instance, Bhushan (1989)). For such firms, the dividend omission announcement may be less unexpected and may, therefore, cause less market uncertainty. Similarly, the quality of information available about firms with a history of volatile earnings will be poor. 
For such firms, the dividend omission announcement may be more unexpected and may cause greater market uncertainty. According to Christie (1994), a firm with a recent history of dividend reductions will create some market expectations of a dividend omission. In this study, the efficacy of the previous quarter’s dividend change yield as a proxy variable for market expectations regarding the impending dividend omission announcement is also examined. The study also seeks to explore the implication of the dividend omission announcement for the “q” characteristics of our sampled firms (high “q” firms connote higher growth opportunity, whereas lower “q” firms are perceived to have weaker growth prospects). We use a proxy variable for Tobin’s q following Chung and Pruitt, who provide a simple, easy-to-use approximation of Tobin's q. Our proxy variable for Tobin’s q is labelled “approximate q”. The above discussion provides the rationale for investigating whether the market’s anticipatory uncertainty preceding the dividend omission announcement (as evidenced by returns variance behaviour) is systematically associated with the firm’s information environment and its “q” characteristics. The study, therefore, examines whether a systematic association exists between such returns variance behaviour and the firm's percentage of institutional equity holding, earnings volatility, dividend change yield and its approximate q, in a multivariate context, where the effects of the behaviour of the firm’s share price and trading volume are also considered. Dividend signalling theories suggest that if the introduction of a dividend stream provides new information about the firm, then the removal of the dividend stream (or dividend omission) should increase informational asymmetry (see, for instance, Ghosh and Woolridge (1988), Sant and Cowan (1994) and Christie (1994)). 
The study, therefore, compares the average pre-announcement (or pre-event) period returns variance with an estimate of normal returns variance subsequent to the dividend omission announcement, which is categorized as average post-announcement (or post-event) period returns variance. In a similar manner, pre-announcement period averages for price spread, trading volume and closing share prices are compared with post-announcement period levels. Enhanced uncertainty will be reflected in significantly higher average returns variance and spread levels in the post-event period relative to the pre-event period. However, averages for trading volume and closing share prices are likely to be lower in the post-event period than in the pre-event period. Related objectives are to examine whether the extent of increase in uncertainty is associated with the magnitude of the firm's dividend change yield, its percentage of institutional equity holding, earnings variability and approximate q.
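The comparisons described above reduce to standard event-study arithmetic: fit a market model over the estimation period, cumulate abnormal returns over the event window, and compare the two periods' returns variances. The following is a minimal sketch of that computation, assuming hypothetical return series; the function and variable names are illustrative and not the authors' own.

```python
import numpy as np

def car_and_variance_ratio(est_returns, event_returns, mkt_est, mkt_event):
    """Market-model CAR over an event window and the ratio of
    event-period to estimation-period returns variance."""
    # Estimate alpha and beta from the estimation period
    beta, alpha = np.polyfit(mkt_est, est_returns, 1)
    # Abnormal return = actual return minus market-model prediction
    ar = event_returns - (alpha + beta * mkt_event)
    car = np.cumsum(ar)
    # Variance ratio > 1 would indicate heightened event-period uncertainty
    var_ratio = np.var(event_returns, ddof=1) / np.var(est_returns, ddof=1)
    return car, var_ratio
```

In the study's design, `mkt_est`/`est_returns` would span the pre-event estimation window and `event_returns` the days -4 through 0 around the Wall Street Journal Index announcement date.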
Dr. K. Shelette Stewart, Nova Southeastern University, Ft. Lauderdale, FL
This study examined the extent to which small businesses with an international focus employ formal business planning techniques and the extent to which such techniques contribute to small business success. The study was based on the hypothesis that small business success is associated with formal business planning. Indicators of both formal business planning, the independent variable, and small business success, the dependent variable, were developed. Survey research was conducted to generate and analyze data gathered from 100 owners/operators of small businesses with an international focus located within the Atlanta Metropolitan Statistical Area (MSA). A five-page questionnaire was developed and a survey analysis grid was designed. The researcher found that those businesses practicing formal business planning techniques were more successful than those not employing them. The conclusions drawn from these findings suggest that formal business planning contributes to the success of small businesses with an international focus. Innovation through entrepreneurship and small business development has proven to be the foundation upon which the pillars of American economic growth stand. A myriad of important innovations may be traced back to small business owners and operators. Small businesses constitute a critical component of the United States economy. They provide approximately 80 percent of all new jobs, employ 53 percent of private sector employees and represent over 99 percent of all employers. Given the prevalence of the Internet, many small businesses are expanding their markets and becoming more international in scope. However, according to the U.S. Small Business Administration, approximately half of all new small businesses fail within the first five years of operation. The agency reports that over half a million small businesses closed and/or filed for bankruptcy during the year 2000. 
A common adage suggests that individuals do not plan to fail; they simply fail to plan. This may also be aptly applied to the small business arena, as numerous challenges, such as limited resources, financial instability, unplanned expansion, inadequate management, and competition, have been identified as contributing factors to small business failures or closures. Nevertheless, most of these issues may be effectively addressed by one critical endeavor: formal business planning. Most of the literature pertaining to the topic of small business planning is more prescriptive than descriptive. There are numerous practical, “how to” books targeting small business owners and operators. These manuals are consistent in their emphasis on the importance of developing and implementing a formal business plan. There is also a plethora of consultants, seminars, conferences, software programs, videos and audiotapes available to guide small business owners and operators through the process of formal business planning. Over the past few years, a number of notable studies (Orpen 1985; Pearce 1987; Hillidge 1990; Schwenk and Shrader 1993; O’Gorman and Doran 1999) have emerged exploring the relationship between planning and small business performance. Nevertheless, there remains a void in the research and literature available on the association between formal business planning and small business success. The primary purpose of this study is to contribute to the body of literature pertaining to the association between formal business planning and small business success, particularly relative to firms having an international focus. Specifically, the study explored: 1. The extent to which small businesses, with an international interest, conduct formal business planning. 2. The degree to which formal business planning contributes to the success of small businesses with an international focus. Survey research constituted the research methodology for this study. 
The units of analysis consisted of 100 small businesses, with an international focus, within the Atlanta Metropolitan Statistical Area (MSA) of the state of Georgia. It was a requirement that the businesses in the sample have an international focus and at least one location in Atlanta. For purposes of the study, a “small business” is defined as a commercial enterprise employing fewer than 500 employees. Neither governmental agencies nor not-for-profit organizations were included in the sample. Respondents consisted of small business owners and operators. Owners are defined as individuals who actually own the business. Operators are defined as persons who hold key operational or senior management positions within the business and support the principal(s) of the firm in business development. President, Chief Executive Officer (CEO), Director, and Principal are common professional titles for individuals in ownership positions. Vice President, Chief Financial Officer (CFO), and Assistant Director are examples of acceptable titles for individuals who hold supporting positions to the principals and, thus, serve as operators as opposed to owners. Formal business planning served as the independent variable while small business development/success was the dependent variable. Formal business planning is defined as the development, implementation, and continued update of a documented business plan tailored for a specific business. Seven business planning elements were utilized as indicators of formal business planning. They included the incorporation of: (1) a written plan, (2) corporate mission statement, (3) organizational plan, (4) marketing plan, (5) financial plan, (6) operational plan, and (7) monthly business planning meetings. All of the business planning indicators were weighted equally. Small business success is defined by the extent to which the firm exhibits a number of indicators of business growth and success. 
A total of five indicators were incorporated to measure small business success: (1) increasing staff, (2) expanding clientele, (3) international growth, (4) establishing new sites and, (5) acquiring businesses. All of the small business success indicators were weighted equally. The owners and operators that were surveyed represented a myriad of industries including construction, education, finance, insurance, real estate, healthcare, manufacturing, mining, professional services, retail/wholesale trade, transportation, communication, and utilities. The international firms posted annual sales/revenue ranging from $25,000 to over $1 million. The Atlanta Chamber of Commerce served as the primary resource center for locating these businesses, as approximately 80 percent of its membership consists of small business owners/operators. The data collection procedure for this project was twofold, consisting of both telephone surveys and one-on-one interviews. The same questionnaire was utilized for both methods. International Atlanta, an annual directory published by the Atlanta Chamber of Commerce, which lists companies that have an international interest, was utilized as a sampling frame for research purposes. This directory features companies of different sizes and of varying industries.
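Because both variables are built from equally weighted yes/no indicators, each firm's planning and success measures reduce to a simple fraction of indicators exhibited. The sketch below illustrates that scoring under assumed field names (the item keys are illustrative, not taken from the study's questionnaire):

```python
# Equal-weight scoring of the seven planning indicators and five success
# indicators described above; the dictionary keys are illustrative assumptions.
PLANNING_ITEMS = ["written_plan", "mission_statement", "organizational_plan",
                  "marketing_plan", "financial_plan", "operational_plan",
                  "monthly_planning_meetings"]
SUCCESS_ITEMS = ["increasing_staff", "expanding_clientele",
                 "international_growth", "new_sites", "acquiring_businesses"]

def score(responses, items):
    """Fraction of the equally weighted yes/no indicators a firm exhibits."""
    return sum(bool(responses.get(k)) for k in items) / len(items)
```

A firm answering yes to all seven planning items would score 1.0 on the independent variable; one exhibiting two of five success indicators would score 0.4 on the dependent variable.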
Dr. Abdulla M. Alhemoud, University of Qatar, Qatar
Dr. Tamama H. Abdullah, Ministry of Public Works, Kuwait
Cultural Understanding and Consumer Behavior: A Case Study of Southern American Perception of Indian Food
Raymond Bailey, Erskine College, SC
Dr. Robert Guang Tian, Erskine College, SC
Cohesion Among Culturally Heterogeneous Groups
Dr. Norman Wright, Brigham Young University-Hawaii, Laie, HI
Glyn Drewery, Brigham Young University-Hawaii, Laie, HI
This paper examines the role of culture in explaining differences in self-reported evaluations of team cohesiveness in culturally diverse teams. In earlier research, Thomas et al. (1994) found that teams composed of culturally diverse members experienced less cohesiveness than did culturally homogeneous teams. Such findings make sense in light of similarity theory, which suggests that humans feel a greater attraction to those who are most similar to themselves (Nahemow and Lawton, 1983). One might also profitably compare the cohesion of teams from various cultures. In this study, however, the authors examine the role of nationality on perceptions of cohesion within a mixed-culture team framework. Hypotheses are formed based on the conflict resolution style of each culture represented. The results indicate a small but significant relationship between the nationality of the respondents and the degree of cohesion attributed to their mixed-culture teams. Asians report the least cohesion, followed by Anglos, while Polynesians indicate the highest levels. Increasingly, business activities involve team members from multiple nationalities and cultures. While alliances between culturally diverse firms often make strategic sense, managers frequently underestimate the challenge of combining employees with different attitudes, beliefs, and work values. As an executive of a large European firm lamented, “we have had strategic plans suffer and careers derail because of complications arising from multinational groups” (Hambrick et al., 1998). In an effort to better understand these challenges, this paper examines the role of national culture in explaining differences in self-reported evaluations of team cohesiveness in culturally diverse teams. Over years of study, cohesion has arguably been the most important outcome variable in small-group research (Carron and Brawley, 2000). Staw et al. (1981) defined cohesion as attraction to members in one's group. 
It can further be defined as a collectivist type of togetherness that exists between team members when team needs transcend individual differences and desires. Cohesiveness arises in groups for two reasons (Tziner, 1982). First, socio-emotional cohesion arises because team members enjoy one another's company. Group members feel a sense of togetherness based upon an appeal to emotions. Most discussions of cohesiveness have been confined to this type alone. However, instrumental cohesiveness presents another type of cohesion. This is the element of cohesiveness that arises when group members believe they cannot achieve the goal of the group alone. It must be a collective effort sustained by each individual's pursuit of the group goal. As a result, group members feel a strong calculative affinity toward other group members. Regardless of its source, cohesion within a team results in several positive outcomes. First, research indicates that when a group reaches for a high performance goal, cohesive groups generally seem to outperform non-cohesive groups. A similar outcome was found in teams of freshman psychology students regardless of goal level (Hoogstraten and Vorst, 1978). Other studies have found that cohesive groups perform at least as well as their non-cohesive counterparts (McGrath, 1984; Shaw 1971). In addition to positive task outcomes, cohesiveness influences many other aspects of team performance. Cohesive groups are associated with greater job and personal satisfaction, increased effectiveness, greater communication among group members and lower absenteeism (Stogdill, 1972). Further, Weinberg et al. (1981) reveal that lack of cohesiveness was the number one problem in teams with interaction problems. In order to obtain the benefits of cohesion, it is important to understand how cohesion arises within teams. One of the ways in which this occurs is through communication. 
As group members communicate openly, they make a commitment toward group goals and operate in a flexible and motivating manner, which results in greater cohesion (Pearce and Ravlin 1987). In addition to open communication, Harrison et al. (1998) reported that interaction between group members led to a higher level of cohesiveness. As people engaged in conversation and interacted more frequently, they became more attracted to each other, finding common interests and developing a deeper level of emotional attachment. This interaction serves to overcome the inherent tendency to focus on surface-level characteristics of team members like race/ethnicity, sex, age, or other overt biological characteristics typically reflected in physical features, shifting that focus to attitudinal differences and similarities. Harrison et al. (1981) found that, in teams with gender diversity, spending more time together resulted in greater cohesion. However, time may not necessarily be a positive factor in the development of a cohesive unit, as reported by Hall et al. (1998). This occurs when individuals are unable to see beyond their cognitive structure based on initial perceived dissimilarities. Even when there is a high level of mutual attraction, underlying attributes that are not necessarily overcome by time affect a group's ability to develop cohesively. The importance of group interaction in determining cohesiveness is also highlighted through the effect of group size. As group size increases, the opportunity to interact with members becomes more difficult (Summers et al., 1998). Members have to relate to a person according to surface-level diversities (Harrison et al., 1998). Communicating common values and goals becomes more difficult and, often, smaller groups and cliques can form within the main body. This creation of coalitions within groups can diminish cohesion even further. 
Many early studies of group cohesion attempted to identify the effects of specific group size. Fisher (1953) reported that as group size among college student work groups increased, intimacy and cohesion decreased. Seashore (1954) and Kinsey (1950) both reported similar findings in their respective studies. Though these studies did not agree on an optimum group size for the most effective group cohesion, Orpen (1986) reported that the most effective work groups were made up of 5 to 10 members.
A New Method for Teaching the Time Value of Money
Dr. Terrance Jalbert, University of Hawaii at Hilo, Hawaii
Students frequently experience difficulty in identifying the appropriate time value of money (TVM) technique to apply to a problem. This paper surveys the TVM presentation in seven popular introductory finance textbooks. A new presentation technique is then developed. The presentation technique is based on a simple method for identifying the appropriate TVM technique to apply to any problem. TVM techniques conducive to applying the calculations in a generalized setting are then presented. Visual aids are provided to assist students in selecting correct techniques. By using these techniques, students are able to more easily identify appropriate TVM techniques. Many techniques have been developed for presenting the time value of money (TVM). Despite this considerable effort on the part of instructors, students frequently experience difficulty identifying the appropriate technique to apply to a specific problem (Eddy and Swanson, 1996). However, it is well known that a pedagogy that works well with one audience does not necessarily work well with another (Bloom, 1956). Thus, the development of new and different techniques that appeal to various audiences is beneficial. This paper develops a new technique for teaching the TVM. The technique is specifically intended to appeal to students who benefit from precise definitions and visual aids. The technique affords instructors a new tool in their arsenal to teach students TVM concepts. The paper begins by surveying how seven popular introductory finance textbooks address the TVM issue. Next, a new approach for teaching the TVM is presented. The approach begins with a simple method for distinguishing between a single sum of money, annuity, perpetuity, growing perpetuity and uneven cash flow stream. Cash flows are distinguished by examining conditions that must be met in order for a series of cash flows to qualify for each classification. 
TVM techniques conducive to applying the calculations in a generalized setting are then presented. Finally, visual aids are provided to walk students through selecting the appropriate TVM technique for a problem. Students nearly unanimously experience difficulty in identifying the appropriate technique to apply to TVM problems. While the TVM issue is complex, some of the difficulty can be attributed to the approach that finance texts take to the issue. This contention is confirmed by Eddy and Swanson, who argue that instructors do not sufficiently develop a frame of reference which begins with simple learning objectives focused on individual topics and progresses to higher levels of understanding (Eddy and Swanson, 1996). This section contains a survey of the approaches used in seven popular finance texts to present TVM concepts. The seven books examined in this study are: 1) Stanley Block and Geoffrey Hirt (BH), eighth edition of Foundations of Financial Management; 2) Charles Moyer, James McGuigan and William Kretlow (MMK), eighth edition of Contemporary Financial Management; 3) Zvi Bodie and Robert Merton (BM), first edition of Finance; 4) Gary Emery (EM), first edition of Corporate Finance Principles and Practice; 5) Arthur Keown, David Scott, John Martin, and William Petty (KSMP), seventh edition of Basic Financial Management; 6) William R. Lasher (LA), second edition of Practical Financial Management; and 7) Eugene Brigham, Louis Gapenski and Michael Ehrhardt (BGE), ninth edition of Financial Management Theory and Practice. These books are believed to be a representative cross section of texts used in introductory finance courses. The survey revealed three common areas of concern regarding TVM presentations. The first concern is the balance authors must strike between overly detailed explanations and explanations that are overly simplistic. Regardless of the level of detail provided, basic elements of the concept must be incorporated to ensure a complete understanding. 
Failure to include the basic elements leads to a presentation that is imprecise and confusing. In each of the finance texts surveyed, a definition for an annuity is provided. EM (p. 100) defines an annuity to be a multipayment problem with equal periodic cash flows. BH (p. 235) define an annuity to be a series of consecutive payments or receipts of equal amount. MMK (p. 137) define an annuity to be the receipt or payment of equal cash flows per period for a specified amount of time. KSMP (p. 178) define an annuity to be a series of equal dollar payments for a specified number of years. LA (p. 142) defines an annuity to be a stream of equal payments, made or received, separated by equal intervals of time. BM (p. 100) define an annuity to be a level stream of cash flows. BGE (p. 250) define an annuity to be a series of equal payments made at fixed intervals for a specified number of periods. These definitions vary widely in their precision. Definition variations are also found with regard to the discussion of uneven cash flow streams. EM, BH, KSMP and BM do not address the issue of uneven cash flow streams in their TVM chapters. MMK (p. 147), LA (p. 168), and BGE (p. 258) introduce uneven cash flow streams as a means of resolving the TVM problem that occurs because of unequal payment amounts. These definitions are generally imprecise and leave open questions that the students must infer from example problems. More precise definitions should help students more easily identify situations where the technique is appropriate. The second concern that the survey revealed is incomplete explanations of how TVM techniques can be utilized. Specifically, the surveyed texts do not sufficiently address the issue of the present and future values of annuities due and ordinary annuities. 
Each of the books surveyed addresses the issue by noting that an ordinary annuity involves receiving the payments at the end of each year and an annuity due involves receiving the payments at the beginning of each year. While these statements are technically correct, they lack generality. Students are frequently unable to generalize the techniques to advanced applications such as deferred annuities. The third concern is the introduction of TVM techniques in multiple chapters throughout the book. Spreading the presentation of TVM techniques across multiple chapters increases the difficulty of integrating the material into a broad understanding. This problem is common in the presentation of growing perpetuities.
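The generality issue raised above can be made concrete with the standard annuity formulas: an annuity due is simply an ordinary annuity shifted one period earlier (multiply by 1 + r), and a deferred annuity is an ordinary annuity discounted back extra periods. The sketch below illustrates this relationship; it is an illustrative example, not the paper's own presentation technique, and the function names are assumptions.

```python
def pv_ordinary_annuity(pmt, r, n):
    """PV of n equal payments, the first due one period from now."""
    return pmt * (1 - (1 + r) ** -n) / r

def pv_annuity_due(pmt, r, n):
    """Annuity due: each payment arrives one period earlier,
    so the ordinary-annuity value is compounded forward once."""
    return pv_ordinary_annuity(pmt, r, n) * (1 + r)

def pv_deferred_annuity(pmt, r, n, defer):
    """Ordinary annuity whose first payment is defer + 1 periods away:
    discount the ordinary-annuity value back defer extra periods."""
    return pv_ordinary_annuity(pmt, r, n) / (1 + r) ** defer
```

Seeing all three as one formula plus a timing shift is exactly the kind of generalization the surveyed texts' end-of-year/beginning-of-year statements leave implicit.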
Taking Note of the New Gender Earnings Gap: A Study of the 1990s Economic Expansion in the U.S. Labor Market
This article examines the impact of economic expansion on the gender earnings gap in the U.S. labor market during the 1990s. Using data from the 1994 to 2001 Current Population Surveys, this research employs the Blinder-Oaxaca decomposition method extended by Cotton, along with a correction for selectivity bias. The results show that the gender earnings gap widened from 1994 to 2001. The pattern of the gender earnings gap described by the results of the decomposition analysis, overall and across three broadly defined occupational categories, is extremely consistent, indicating that women were adversely affected during the economic expansion of the 1990s. The result of a slightly widened gender earnings gap casts doubt on the widely held optimistic expectation of a narrowing of the gap developed over the past several decades. In the future, labor policy should focus on changing labor market structure so that females are treated equally with males, in order to narrow the gender earnings gap. During the past several decades, considerable attention in the academic arena has been focused on the analysis of women’s labor market position. Earnings are not only a major determinant of workers’ economic welfare, but also a significant factor in a multitude of decisions, ranging from labor supply to marriage and even to fertility (Blau and Kahn, 1999). For about 20 years after World War II, the ratio of women’s to men’s earnings remained at approximately 60 percent. Since 1976, however, the gender gap in annual earnings on average declined by about 1 percent per year (O’Neill and Polachek, 1993). Also, from 1978 to 1999, the weekly earnings of women full-time workers increased from 61 percent to 76.5 percent of men’s earnings. However, the narrowing earnings gap failed to decline further after the mid-1990s (Blau and Kahn, 2000), pushing researchers to scramble for possible explanations. 
Despite the intense scrutiny of the gender earnings gap, only a handful of researchers have attempted to examine its trends, especially in recent years. This research fills that gap by employing the Current Population Survey (CPS) to estimate the gender earnings gap from 1994 to 2001. Since the last recession in 1991, the most visible effects of the economic boom included a dramatic decrease in unemployment and a notable increase in jobs during most of the 1990s. While growth remained positive, the rate of expansion in the spring of 2001 was the weakest, a barely discernible 0.7 percent growth rate, since the 0.1 percent rate of decline in the first quarter of 1993, when the U.S. economy was struggling to emerge from recession. Therefore, the use of 8 years of data from 1994 to 2001 allows us to examine a substantial time span. While the Blinder (1973)-Oaxaca (1973) decomposition method was used for the decomposition of the earnings gap in most previous research, this research utilizes the method developed by Cotton (1988) that modified the Blinder-Oaxaca method. The implicit assumption of the Blinder-Oaxaca decomposition method that the male workers’ earnings structure would prevail in the absence of discrimination is not theoretically supported. Cotton observed that using either the male or the female estimated coefficients of the earnings equations as the nondiscriminatory earnings structure is flawed, and suggested a decomposition technique based on the weighted average of the coefficients for the two groups. Heckman’s (1979) method for correcting sample selectivity bias was used to improve the methodological underpinnings of the analysis. The data used in this research are from the CPS Annual Demographic Survey (March Supplements) from 1994 to 2001, which provides a variety of information on the U.S. labor force. 
These data capture the changes in women’s economic position and provide up-to-date information on the gender earnings gap.1 This analysis uses only full-time workers between 18 and 65 years old who were born in the United States. The restriction to full-time workers controls for the differences in working hours of male and female workers. Women tend to work fewer weeks per year and hours per week than men, so comparing male and female earnings without controlling for part-time/full-time status would give misleading results. The focus on full-time workers results in a more homogeneous sample, so that the computation of the gender gap is not affected by any hourly wage penalty for part-time work. The age restriction excludes the aged, most of whom have retired from the labor market, and the very young, who are still in school and/or in the early stages of on-the-job training. Since the socioeconomic process that leads to the gender earnings gap of the foreign-born population is known to differ from that of the natives, this research is restricted to the native-born population.2 The earnings equation is ln(W_j) = X_j β_j + ε_j, where j = m, f indicates male and female respectively. The selectivity correction uses the inverse Mills ratio λ_j = φ(Z_j γ_j)/Φ(Z_j γ_j), where φ and Φ are, respectively, the density and distribution function for a standard normal variable, X is a vector of worker characteristics that affect earnings, and Z is a vector of worker characteristics that determine whether an individual will be in the workforce.3 The dependent variable in the earnings equation is the log of the total annual earnings from wage and salary. The variables included in the vector of worker characteristics are grouped into four broad groups: demographic variables (age, sex, race, marital status, number of children), human capital variables (education, experience4), geographic variables (metropolitan residence, southern residence), and others (union membership, holding a white-collar occupation). 
Four levels of education were identified; the dummy variables Edu2 (high school graduate), Edu3 (13 – 15 years of schooling), and Edu4 (college graduate) were included, with the lowest level of schooling serving as the omitted reference category.
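Cotton's modification replaces the all-male (or all-female) reference coefficients with a weighted average of the two groups' coefficients, splitting the gap into an explained (endowments) component, a male treatment advantage, and a female treatment disadvantage. The sketch below illustrates that arithmetic on estimated coefficients; variable names are illustrative, and Cotton's weighting by group sample shares is assumed.

```python
import numpy as np

def cotton_decomposition(Xm_mean, Xf_mean, beta_m, beta_f, n_m, n_f):
    """Cotton (1988)-style decomposition of the mean log-earnings gap.
    The nondiscriminatory structure beta_star weights each group's
    coefficients by its share of the pooled sample."""
    w = n_m / (n_m + n_f)
    beta_star = w * beta_m + (1 - w) * beta_f
    explained = (Xm_mean - Xf_mean) @ beta_star   # endowment differences
    male_adv = Xm_mean @ (beta_m - beta_star)     # male treatment advantage
    female_dis = Xf_mean @ (beta_star - beta_f)   # female treatment disadvantage
    return explained, male_adv, female_dis
```

By construction the three components sum to the total mean log-earnings gap, Xm_mean·beta_m − Xf_mean·beta_f; the coefficient vectors here would come from selectivity-corrected earnings regressions estimated separately by sex.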
Zen of Learning: Folkways Through Wisdom Traditions
Dr. Satinder K. Dhiman, Woodbury University, Burbank, CA
This paper discusses ten folk laws of learning. These laws are in the nature of musings about learning. The purpose of each law is to clear some psychological barrier to learning. These laws invite the learner to examine his or her assumptions and expectations about learning. These laws tell us that personal likes or dislikes may make for comfort but not for learning. No true learning can take place unless the learner is willing to undergo a shift of mind and to challenge his or her ingrained habits of thought. These laws also bring out the importance of patience, humility, and sharing in the context of learning. To underscore the message, this paper draws heavily on quotes, anecdotes, and stories culled from the wisdom traditions of Taoism, Sufism, and Zen. This author has used these laws in his management-related classes, at both the undergraduate and graduate levels. This methodology has helped this writer orient the students in the art and science of learning. It also clarifies several misconceptions about learning during the early stages of the course. It is also in harmony with the growing literature on the concept of the “Learning Organization” inspired by such management authors as Peter Senge and Max Depree. The following folk principles are in the nature of musings about learning. It is not the intention of this writer to present another "theory" of learning. No originality is intended or implied other than the presentation and rearrangement of the material. Most of these insights are based on the author's long-time study of the wisdom traditions of Sufism and Zen. Several anecdotes and stories have been used to illustrate the underlying theme. To facilitate better comprehension and assimilation of information, this writer has occasionally used appropriate teaching stories during class discussion. It is indicated to the students that these stories are not ends in themselves but means to an end, the end being better understanding of the material presented. 
In addition, these stories, owing to their symbolic value, serve as ideal developmental tools of learning. If eighty percent of the job is showing up, to quote Woody Allen, then the remaining twenty percent depends upon paying attention, and stories help tremendously to tease a greater attention span out of the listener. This author has followed the practice of familiarizing students with these learning principles at the beginning of a management or organizational behavior class. They have proved very useful in creating an atmosphere of openness and a willingness to learn, on the part of both the teacher and the students. To a punctilious reader, some of these principles may seem "obvious." But, then, the ability to recognize the obvious is part of learning to learn. It has been said that learning involves the ability to unlearn on an ongoing basis. Learning to unlearn is, indeed, the prerequisite to all true learning. As a matter of fact, true learning can more aptly be described as an exercise in unlearning, the emptying of the mind of mistaken assumptions and ingrained habits. Nothing can be put into a full pot: "If you want to fill a container," goes an Eastern saying, "you may first have to empty it." Similarly, an English proverb says that you cannot make an omelet without breaking eggs. Picasso was referring to this act of "emptying" when he said, "Every act of creation is first an act of destruction." It may be pointed out that "to make oneself empty" does not mean something negative; it refers to the willingness and openness to receive (Suzuki, Fromm, & De Martino, 1960). We see the operation of this principle throughout nature. Think of a tree that insists on keeping its old leaves when spring comes, or of a seed that does not want its outer shell to break when it is ready to sprout. According to a Biblical verse, "No one patches new cloth onto an old garment; no one pours new wine into an old wineskin" (Crossan, 1994, p. 
85; see Matthew 9:16-17). This is related to the first principle. Learning how to learn means examining one's assumptions and testing one's long-cherished beliefs. We cannot learn entirely by our own assumptions, by our own likes and dislikes. As one Zen master (Cleary, 1993, p. 82) has said, "It is hard for people to see anything wrong with what they like, or to see anything good with what they dislike." More often than not, the relevance of a thing is inversely related to its attractiveness. No true learning can take place unless the learner is prepared to undergo the shift of mind required to perceive the message presented. In Zen this shift is referred to as "taking off the blinders and unloading the saddlebags." To quote Marcel Proust, "The real voyage of discovery consists not in seeking new lands but in seeing with new eyes." Shah (1978) has suggested, "Study the assumptions behind your actions. Then study the assumptions behind your assumptions." (p. 91) Commenting on The Structure of Scientific Revolutions, Thomas Kuhn (1970, p. 166), who brought the word paradigm into vogue, remarked that even "scientific training is not well designed to produce the man who will easily discover a fresh approach." Hagen (1995, p. 21) narrates the following incident from the early career of Einstein that illustrates the dynamics of inveterate paradigms: When Albert Einstein published his general theory of relativity in 1915, he did not believe that the universe was expanding, even though certain parts of his theory suggested that it was. He thought it was just some strange quirk in the math that implied an expanding universe, so by inserting a special term he was able to get rid of the troublesome part that predicted the expansion, which he regarded as an absurdity. Then, in the 1920s, astronomers discovered that the distant galaxies were receding rapidly from the earth…. This was an observed fact: physical evidence that implied an expanding universe. 
Einstein later referred to his alteration of the portion of his great theory that would have predicted such an expansion as the biggest blunder of his career. While paradigms serve important functions of assimilating and preserving knowledge, they tend to assume a rigidity that prevents further learning and growth. One should nurture paradigm pliancy and the willingness to look for truth in unexpected places. A turtle makes progress only when it sticks its head out. It is a common experience that most people assume themselves to be exceptions when presented with the need to learn.
Computer Crimes: How can You Protect Your Computerised Accounting Information System?
Dr. Ahmad A. Abu-Musa, KFUPM, Saudi Arabia
Computer crime is almost inevitable in any organization unless adequate protections are put in place. Computer crime is no longer a local problem, and security solutions can no longer be viewed only from a national perspective; both have expanded from relatively limited geographical boundaries to become worldwide issues. Therefore, protecting computerised accounting information systems (CAIS) against prospective security threats has become a very important issue. The main objectives of this paper are to investigate the significant security threats challenging CAIS in the Egyptian banking industry and the prospective security controls actually implemented to prevent and detect security breaches. A self-administered questionnaire was used to survey the opinions of the heads of internal audit departments (HoIAD) and the heads of computer departments (HoCD) in the Egyptian banking industry regarding the following CAIS security issues in their banks: the characteristics of CAIS in the Egyptian banking industry; the significant perceived security threats to CAIS in the Egyptian banking industry; and the prospective security controls implemented to eliminate or reduce security threats in the Egyptian banking industry. The entire population (sixty-six banks' headquarters) of the Egyptian banking industry was surveyed in this research. Seventy-nine completed and usable questionnaires were collected from forty-six different banks' headquarters: forty-six of these questionnaires were completed by the heads of computer departments, and thirty-three were filled in by the heads of internal audit departments. The response rate for the computer departments (after excluding merged, liquidated, too remote, and non-computerised banks) was 79.3%, whilst the response rate for the internal audit departments was 56.9%. The paper proceeds to discuss the main CAIS security threats and the adequacy of implemented security controls in the Egyptian banking industry. 
The significant differences between the two respondent groups, as well as among bank types, regarding the main security threats and implemented security countermeasures are investigated. Inadequate security controls have been discovered, and some suggestions to strengthen the weak points of security controls in the Egyptian banking industry are proposed. Information is a valuable corporate asset, which should be protected with care and concern, because business continuity and success are heavily dependent upon the integrity and continued availability of critical information. Reliance on information and rapidly changing technology forces organisations to implement comprehensive information security programs and procedures to protect their information assets; the success of a security program relies largely on security awareness and compliance by employees. For many organisations, information itself has become the most valuable commodity or resource. Failure to secure information, or to make it available when required to those who need it, can, and does, lead to financial loss. Over the last several years, changes in technology have made computers much easier to use. However, user-friendly systems have created significant risks related to ensuring the security and integrity of computer and communication systems, data, and management information. West & Zoladz (1993) stated that although computers provide many benefits, the inherent security issues of computerised systems are often not addressed by management. Many organisations do not realise the importance of microcomputer security until an unauthorised modification to a payroll file, or some other such event, occurs. Because information may be an organisation's most valuable asset, leaving it unprotected is tantamount to underinsuring fixed assets or inventory. Organisations can no longer afford to ignore the importance of information security in light of computer fraud, hackers, and computer viruses. 
Shriven (1991) argued that computer crime is almost inevitable in any organisation unless adequate protections are put in place. Since traditional financial controls are usually insufficient to guard against these sophisticated crimes, the controller must get involved. In addition to computer crime, however, controllers also need to worry about the growing computer virus problem; some virus attacks are directly associated with computer crime attacks. While technically skilled hackers and others from outside the company can be quite dangerous, the potentially more dangerous criminals are the authorised users who can commit unauthorised acts. The computer crime problem is no longer a local one, and the security solution cannot be viewed only from a national perspective: computer crime and information security have expanded from relatively limited geographical boundaries to become worldwide issues. This worldwide growth has direct implications for information security management (Sherizen, 1992). According to Williams (1995), any type of security breach, however minor, can become disruptive and expensive, so it makes better business sense to take a preventive approach: the sooner action is taken to safeguard information systems, the cheaper it will be for the company in the long run. As automated accounting systems become more readily available to all types and sizes of businesses, the need to understand and employ adequate systems security becomes an issue no business owner can ignore (Henry, 1997).
Marketing on the Net: A Critical Review
Dr. S. Altan Erdem, University of Houston-Clear Lake, Houston, TX
Dr. Richard L. Utecht, University of Texas at San Antonio, San Antonio, TX
While e-commerce has played an incredible role in marketing over recent years, there have been some concerns about its potential "not-so-positive" effects on certain business settings. Many believe that some aspects of e-commerce require changes to some of the basics of marketing. The purpose of this paper is to review some of these peculiarities of e-commerce and examine whether they are likely to result in any changes to traditional marketing practices. It is hoped that this review will provide marketing practitioners with added incentives to explore e-commerce ventures further and develop practical insights for making better use of the net in their marketing functions. The impact of e-commerce on marketing distribution channels is far-reaching and should not be underestimated. Technological and market forces will determine the extent to which consumers can gain access to the information they desire (Alba et al. 1997). The tremendous growth in Internet use has led to a critical mass of consumers and firms participating in the global on-line marketplace. In the context of consumer sales, e-commerce businesses must embrace a strategy that seeks to serve the distribution requirements of all consumer market segments. As people become more comfortable with the web, traditional businesses have to find new ways of marketing to their customers in web environments. As they move to incorporate direct Internet sales into established distribution channels, they will face a daunting task, and may face stiff opposition both from within the organization and from established channel partners. This paper seeks to examine the explosive impact of e-commerce activity. It reviews the main effects of the net on marketing channels. 
The purpose is to examine issues such as the globalization impact of the Internet, the implications of the ever-increasing home shopping market, the issues facing consumers and businesses that utilize the Internet, and, finally, the opportunities and challenges associated with marketing on the net. Technology is rapidly advancing every day. One of the greatest and most important advancements has been the World Wide Web. The web is an Internet service based on hypermedia, which allows users to explore the Internet easily through a browser; a browser is an interface used to organize incoming hypermedia or information (Ainscough and Luckett 1996). The Internet is, conceptually, a new and highly efficient way of accessing, organizing, and sharing information. As a result, the Internet has had a huge impact on the business world. Companies are quickly moving to use the Internet as a way of segmenting markets and doing something that ordinary promotional media cannot: reaching consumers across the country and around the world interactively and on demand, all at a reasonable cost. The Changing Channel Structure: The Internet has made a tremendous impact upon the marketing channel. The traditional channel structure flow begins with the role of the manufacturer, whose purpose is to provide services, designs, and new products. Wholesalers are engaged in selling goods for resale or business use to retailers and to educational, commercial, and industrial institutions. Retailers provide goods and services to consumers. Because of the Internet, services or roles that channel members provide can be taken over by the consumer. Changing Role of the Consumer: Consumers have retreated to the retailer level, where they have assumed many of the functions formerly performed by the retailer. 
Retailers have moved back to the wholesaler or distributor level, where they often find themselves performing a more passive "supplier" role (English 1985); the roles of the channel members are moving backwards. Electronic retailing will provide an explosion of products that are "equally available" to the electronic shopper. Consumers can purchase products directly from manufacturers or wholesalers, usually at lower cost than from the retailer. In response to this changing role of the consumer, retailers have developed wholesale and warehouse network contacts. Wholesalers can also benefit from electronic consumers, since they can easily allow consumers to tie into their distribution systems. In fact, the role of wholesalers has not receded; instead, they have simply expanded their operations to include a variety of new "retailers" as customers, retailers who, in this case, happen to be individuals tied in by computer lines. Changing Role of the Manufacturer: Advances in technology have also affected manufacturers. The consumer has taken over the initiative in product design, and the manufacturer's new role is that of "component supplier," at a virtual sub-contractor level (English 1985). The flow between members of the marketing channel is also based on the ability of one channel member to control the decisions of another. The emergence of electronic retailing will cause shifts in the balance of power in the system, and the consumer may gain too much control. Manufacturers would also benefit because their dependence on wholesalers and retailers would decrease. The losers would be the wholesalers and the retailers, because electronic technology narrows the gap between producers and consumers; it literally eliminates the channel itself.
Using Cost-Benefit Analysis for Evaluating Decision Models in Operational Research
Dr. Awni Zebda, Texas A&M University-Corpus Christi, Texas
Operational researchers and management scientists have recommended that the use of decision models be subject to cost-benefit analysis. This paper provides insight into cost-benefit analysis and its shortcomings as a tool for evaluating decision models. The paper also identifies and discusses the limitations of alternative evaluation methods. Understanding the limitations of cost-benefit analysis and the other evaluation methods is essential for their effective use in evaluating decision models. Over the years, management scientists and operational researchers have proposed quantitative and mathematical models to aid decision making in business organizations. Decision models for problems such as capital budgeting, cash management, manpower planning, profit planning, and inventory planning and control represent an integral part of the management science/operational research literature, as well as the literature of the functional areas of management such as accounting, finance, marketing, personnel management, and production and inventory management. The development and use of decision models can be costly. Thus, establishing the value of these models is a necessary prerequisite for their use by practicing decision makers (e.g., Finlay and Wilson, Hill and Blyton). According to Gass [1983, p. 605], "the inability of the analyst [and researcher] to demonstrate to potential users ... that a model and its results have ... credibility [and value]" is one of the primary reasons that models are not widely used in practice. In spite of its importance, the question of model value has not received much attention in the management science/operations research literature (e.g., Finlay and Wilson, Gass, Miser). The purpose of this paper, therefore, is to provide insight into the most widely recommended methods for evaluating decision models, with special emphasis placed upon cost-benefit analysis. 
Increased insight into cost-benefit analysis and other evaluation methods should benefit decision researchers, analysts, and practicing decision makers who are involved in the development and selection of decision models. The paper is organized around the following three questions. The first asks the deceptively simple question: what are the benefits of decision models? Hardly any generalizations can be made about the cost-benefit test unless one first separates the different types or kinds of benefits; in other words, a classification of benefits is needed. The second question is: what are the limitations of cost-benefit analysis as a means of establishing the value of decision models? Stated differently, is cost-benefit analysis empirically valid? The final question is: what are the alternatives to cost-benefit analysis, and what are the limitations of these alternatives? These questions are addressed in the following three sections, respectively. The last section provides a summary and concluding remarks. The discussion in the paper draws not only on the MS/OR literature but also on behavioral science, economics, and the literature of the functional areas of management such as marketing, finance, accounting, personnel management, and inventory management. Over the years, management scientists and operational researchers (e.g., Churchman [1970, p. B-44]) have indicated that the choice of decision models should be based on cost-benefit analysis. The basis for the cost-benefit test can be found in price theory and in the economics and statistics literature dealing with the economics of information (e.g., Marschak and Radner). Behavioralists also find the cost-benefit test appealing because it provides justification for the assumption of calculated or bounded rationality (e.g., Beach and Mitchell, Einhorn and Hogarth, Johnson and Payne). Cost-benefit analysis suggests that decision models should be used only if their benefits exceed their costs. 
The costs of decision models include, among others, development costs, clerical costs, data collection and data processing costs, and training costs. The benefits of decision models are less obvious and may be classified along two dimensions (effect and time), as shown in Figure 1. Within the first dimension there are two types of benefits: the benefit of improving the consequences (outcomes) and the benefit of improving the decision-making processes. Within the second dimension, the two types of improvements may be classified as before-the-fact or after-the-fact improvements. Stated differently, actions can be better in a prior sense if they promise the most desirable results before they are implemented and their effects are observed, and/or better in a posterior sense if they produce the best results after they are implemented and their effects are observed. Decision models may improve the decision-making processes in different ways. First, models may lead to more consistency in choice: consistency over time and consistency (consensus) among decision makers. In fact, as noted by many researchers (e.g., Churchman [1961, pp. 13-14], Einhorn, Hogarth [1987, p. 228]), the inconsistency displayed by people represents a primary reason that models are (and should be) used. Moreover, decision models (like other information systems) may improve the accuracy of the decision. Blackwell, for example, showed analytically that the value of an information system is a non-decreasing function of the system's accuracy. Similarly, Johnson and Payne noted that the choice of decision models is dependent upon their ability to provide accurate decisions. Improved consistency, consensus, and accuracy are the most widely mentioned benefits of decision models. However, decision models have other benefits. For example, decision models may reduce the decisional effort required for decision resolution [Good, 1961]. 
The reduction in decisional effort includes reducing the time, resources, and effort (including mental effort) used in making the decision (e.g., Shugan). Decision models may also increase programmability and facilitate the delegation of the decision to a lower level of management. Increased programmability may help replace the human decision maker with a mechanical aid and, consequently, release some of his or her time and effort, which can then be used in solving more important problems. Improved delegation may ease the decision maker's total burden and provide him or her with essential thinking time. Increased delegation may also provide lower-level managers with essential training that would help them grow and become more effective decision makers. The lower-level managers may also make better decisions because they may have more time and/or more (local) information. In addition, models may improve the decision-making process by adding decidability and structure (e.g., White [1969, p. 12]), objectivity (e.g., Bowen), and scientific confirmation to the choice.
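The cost-benefit test discussed above can be illustrated with a small numerical sketch. All of the figures below (the accuracies, payoffs, and model cost) are invented for illustration, not drawn from the paper; the example simply mechanizes the rule that a decision model should be adopted only if its expected benefit exceeds its cost, and it reflects Blackwell's observation that value is non-decreasing in accuracy.

```python
# Hypothetical illustration of the cost-benefit test for a decision model.
# All numbers are invented for the example.

def expected_payoff(accuracy, payoff_right, payoff_wrong):
    """Expected payoff when the chosen action is correct with probability `accuracy`."""
    return accuracy * payoff_right + (1 - accuracy) * payoff_wrong

# Unaided decision maker vs. model-aided decision (assumed accuracies).
payoff_unaided = expected_payoff(0.60, 100_000, -20_000)
payoff_with_model = expected_payoff(0.85, 100_000, -20_000)

model_benefit = payoff_with_model - payoff_unaided
model_cost = 15_000  # development, data collection, and training costs

# The cost-benefit rule: adopt the model only if its benefit exceeds its cost.
use_model = model_benefit > model_cost
print(model_benefit, use_model)
```

Raising the assumed model accuracy never lowers `model_benefit`, which is the sense in which value is non-decreasing in accuracy.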
An Empirical Note on the Impact of the Price of Imported Crude Oil on Inflation in the United Kingdom
Dr. Richard J. Cebula, Armstrong Atlantic State University, Savannah, GA
Dr. Richard D. McGrath, Armstrong Atlantic State University, Savannah, GA
Dr. Yassaman Saadatmand, Armstrong Atlantic State University, Savannah, GA
Dr. Michael Toma, Armstrong Atlantic State University, Savannah, GA
This study empirically investigates whether the Bank of England's assumption that rising prices on imported crude oil lead to domestic inflation in the United Kingdom has had validity. In a model where real GDP growth and money stock growth are both allowed for, empirical estimation reveals compelling evidence for the validity of this assumption. In particular, the greater the percentage increase in imported crude oil prices, the greater the domestic inflation rate. In addition, oil price shocks involving imported crude oil price hikes of 40 percent or more in a given year further elevate the domestic inflation rate. During the last three decades, it has been commonplace among public policymakers as well as consumers to assume that rising prices on imported crude oil act to increase domestic inflation; clearly, this constitutes a form of the so-called "imported inflation hypothesis" (i-i hypothesis). This assumption may have been predicated to some extent on the experience of the 1970s, wherein sharply rising crude oil prices imposed by O.P.E.C. nations were believed in many nations to have systematically exacerbated domestic inflation. For the case of the U.S. and the other G7 nations, at least one study [Cebula and Frewer (1980)] found strong empirical support for the i-i hypothesis: for the 1955-1979 period, Cebula and Frewer (1980) find rising prices on imported crude oil to lead to increased domestic inflation in all of the G7 nations. More recently, Cebula (2000) provides similar findings for the U.S. for the more current period of 1965-1999. However, whereas there has been only limited formal analysis of the i-i hypothesis as it involves crude oil prices for the U.S., even less such formal analysis has been performed for the other industrialized nations. Indeed, the Cebula and Frewer (1980) study is over two decades old. 
Given the resilience of the acceptance among policymakers in industrialized nations of the i-i hypothesis as it relates to the price of imported crude oil, it may be useful to provide a formal updated investigation of the hypothesis for industrialized nations other than the U.S. Such is the purpose of the present study. In particular, for the period 1975-1999, this study empirically investigates for a large industrialized nation, namely, the United Kingdom, the i-i hypothesis as it relates to the price of imported crude oil. A simple model is found in section II of this study, whereas an empirical model and the data descriptions are found in section III. Empirical results are provided in section IV, whereas section V provides a summary of the findings. Based on the models in Cebula and Frewer (1980), Cebula (2000), and the standard IS/LM/AD/AS model, the inflation rate (P) is assumed to depend on a variety of demand-side and supply-side factors. In principle, the demand-side influences presumably would include the following: the percentage growth rate of real GDP (Y); the percentage growth rate of the M2 money stock (M2); the percentage rate of increase in imported crude oil prices (POIL); and the experience of crude oil price shocks (POILSHOCK). Presumably, the greater the growth rate of real GDP, the greater the growth rate of aggregate demand for goods and services and thus the greater the domestic inflation rate, ceteris paribus. Next, in the spirit of the monetarist tradition, the greater the growth rate of the money supply, the greater the growth in the aggregate demand for goods and services and hence the greater the inflation rate, ceteris paribus. 
This money-aggregate demand-inflation linkage could assume a variety of forms, including a simple wealth effect, lowered interest rates, and/or a wealth effect involving equity prices buoyed upwards by a rising money supply growth rate and/or higher bond portfolio values resulting from lower interest rates. Next, the greater the rate of increase in the price of imported crude oil, the greater may be the expected inflation rate; in turn, the latter presumably accelerates the growth rate of aggregate demand and leads to higher actual inflation as households endeavor to "beat," or at least insulate themselves from, the expected inflation. Finally, as a by-product of the effects of POIL, an oil-price shock (POILSHOCK), in which there is a sudden and dramatic increase in the price of imported crude oil, is likely to produce (as the market reacts) a sudden and dramatic increase in expected inflation and hence in actual inflation, ceteris paribus. On the supply side, presumably, the greater the rate of increase in the price of imported crude oil, the greater the rate of increase in both oil-product-related production costs and transportation costs for a broad spectrum of goods and services; therefore, to the extent that increased production and transportation costs are passed on to final consumers, the greater the actual inflation rate, ceteris paribus. Similarly, crude-oil-price shocks presumably tend to produce a sudden and dramatic increase in production costs for certain goods and services and in transportation costs for a broad spectrum of goods and services, and hence act to elevate final product and service inflation rates as well. 
Finally, to the extent that increases in POIL and oil price shocks are experienced and lead to expected inflation, the greater the upward pressure on (1) nominal wage rates and (2) nominal interest rates, i.e., borrowing costs, and hence, to the extent that such costs are passed on to final consumers, the greater the inflation rate of final commodity output, ceteris paribus.
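The reduced-form relationship sketched above can be written as a single estimating equation, P = b0 + b1·Y + b2·M2 + b3·POIL + b4·POILSHOCK + e, and fitted by ordinary least squares. The sketch below uses entirely synthetic annual data: the coefficients, variances, and the injected shock year are invented for illustration and do not reproduce the study's data or estimates.

```python
import numpy as np

# Hypothetical sketch of the reduced-form equation described above:
#   P = b0 + b1*Y + b2*M2 + b3*POIL + b4*POILSHOCK + e
# estimated by OLS on synthetic annual data (25 years, mimicking 1975-1999).
# All numbers below are invented, not the study's.

rng = np.random.default_rng(0)
n = 25

Y = rng.normal(2.5, 1.0, n)        # real GDP growth, percent per year
M2 = rng.normal(8.0, 2.0, n)       # M2 money stock growth, percent per year
POIL = rng.normal(5.0, 15.0, n)    # % change in imported crude oil price
POIL[5] = 60.0                     # inject one oil-shock year for illustration
POILSHOCK = (POIL >= 40.0).astype(float)  # dummy: price hike of 40%+ in a year

# "True" relation used to generate the synthetic inflation series
P = 1.0 + 0.4 * Y + 0.5 * M2 + 0.05 * POIL + 2.0 * POILSHOCK + rng.normal(0, 0.5, n)

# OLS: regress P on a constant, Y, M2, POIL, and POILSHOCK
X = np.column_stack([np.ones(n), Y, M2, POIL, POILSHOCK])
beta, *_ = np.linalg.lstsq(X, P, rcond=None)
print(np.round(beta, 2))  # estimates of (b0, b1, b2, b3, b4)
```

With data generated this way, the estimated coefficients on Y, M2, and POIL recover the positive signs the i-i hypothesis predicts.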
Application of Taguchi Methods for Process Improvement for Tubular Exhaust Manifolds
Dr. C. P. Kartha, University of Michigan-Flint, Michigan
Taguchi Methods refer to quality improvement activities at the product and process development stages of the product development cycle. They are based on the realization that significant improvement in quality can be achieved by engineering quality into a product at the front end of the product cycle, that is, at the design stage rather than at the manufacturing stage. This paper discusses the theoretical and practical aspects of Taguchi Methods. An application of Taguchi Methods to optimize a production process is also discussed. The process involves production of an automotive exhaust manifold that had a quality problem: excessive weld in a port opening restricted passage of the required gauge and prompted a hand grinding operation. Through a Taguchi experiment the problem was successfully solved, eliminating the tedious hand grinding process; the improved process also resulted in significant cost reduction and increased efficiency. Taguchi Methods, also known as Quality Engineering Methods, refer to quality improvement activities at the product and process design stages of the product development cycle. Traditional quality control methods are designed to reduce variation during the manufacturing stage; the emphasis has been on tightly controlling manufacturing processes to conform to a set of specifications. Taguchi Methods are based instead on the realization that significant improvement in quality can be achieved by engineering quality into a product at the design stage. By this method, variables that affect product quality are analyzed systematically to determine the optimum combination of process variables that reduces performance variation while keeping the process average close to its target. An important element of the method is the extensive and innovative use of statistically designed experiments. 
This method has gained immense popularity in the United States in recent years. Though Taguchi Methods have been used successfully in Japan since the sixties, they were not used in the U.S. until the early eighties. The first English translation of the work, published by the Central Japan Quality Control Association, appeared in 1980. Subsequently, a number of successful applications of the method were reported by U.S. manufacturers. Kacker and Hunter gave excellent overviews of Taguchi's ideas and helped clarify questions surrounding terminology and formulations of the methods, thereby exposing the method to a much wider audience. A critical evaluation of the method was given by Box. This paper discusses an application of Taguchi Methods to optimize a manufacturing process that involves the production of an automotive exhaust manifold. A brief overview of the method is presented first, followed by a discussion of the experiment for process improvement. Let Y be the performance characteristic of a manufactured product, with mean E(Y) = μ and variance VAR(Y) = σ². Let τ be the target value of the performance characteristic; ideally, E(Y) = μ = τ. The variance σ² represents the product's performance variation and is caused by variability in the measurements, fluctuations in environmental variables such as temperature and humidity, product deterioration, and manufacturing imperfections. The smaller the performance variation around the target value, the better the quality of the product. Countermeasures against performance variation caused by environmental variables and product deterioration can only be built into the product at the product design stage. Taguchi introduced a three-stage procedure for assigning optimum values to product and process design characteristics. The first stage, known as System Design, is the process of applying scientific and engineering knowledge to produce a basic prototype model. 
The second stage, Parameter Design, is a process to identify the combination of process characteristics that minimizes performance variation. Usually, these settings will be an improvement over the initial settings of the System Design. The third stage, Tolerance Design, is a procedure to determine tolerances around the optimum parameter settings identified at the Parameter Design stage, so that the overall cost is minimized. Taguchi defines quality as "the loss imparted to the society from the time a product is shipped." The loss is measured in monetary units and is proportional to the squared deviation of the performance characteristic Y from its target value τ. Target values are to be stated as nominal levels rather than in terms of interval specifications only. Two products that are designed to perform the same function may both meet specifications but can impart different losses to society. Parameter design is an important step of product design. During the parameter design stage, experiments are conducted to identify process design factors that minimize the expected loss (3). The objective is to study the effects of k factors x = (x1, x2, ..., xk) upon the expected response μ. An experimental design D is used to explore this relationship, each experimental run representing a separate setting of the k factors. A typical parameter design experiment consists of two parts: an inner array and an outer array. Kacker refers to these as the design matrix and the noise matrix. The columns of the inner array represent a selection of k product design factors, the entries in the columns representing test settings of these factors. Each row of the inner array represents a product design. The columns of the outer array represent noise factors, with each row representing a different combination of noise factors. Taguchi proposes that orthogonal arrays be used as inner and outer arrays. 
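The crossed inner/outer array structure can be sketched in a few lines. This is a generic illustration, not the paper's manifold experiment: the L4 array and the factor counts are textbook examples, and the run layout simply pairs every control-factor row with every noise-factor row:

```python
from itertools import product

# Inner array: an L4 orthogonal array for 3 two-level control factors in 4 runs.
# In any pair of columns, every level combination appears equally often.
inner_array = [
    (1, 1, 1),
    (1, 2, 2),
    (2, 1, 2),
    (2, 2, 1),
]

# Outer array: a full factorial over 2 two-level noise factors.
outer_array = list(product((1, 2), repeat=2))

def crossed_design(inner, outer):
    """Each inner-array row (a product design) is run at every outer-array
    noise condition, giving the replicates used to assess robustness."""
    return [(control, noise) for control in inner for noise in outer]

runs = crossed_design(inner_array, outer_array)
print(len(runs))  # 4 inner rows x 4 outer rows = 16 runs
```

The orthogonality of the inner array is what allows factor effects to be estimated independently from only four product designs instead of the full 2³ = 8.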
In assessing the results of experiments, Taguchi recommends the signal-to-noise ratio SN as the performance statistic. The SN ratio is a function of μ/σ, the inverse of the coefficient of variation. When the response variable is continuous, the loss L(Y) takes one of three forms depending on whether smaller is better, larger is better, or a nominal value is best. The SN ratios are defined in such a way that maximizing SN is equivalent to minimizing the expected mean squared error about the target, and hence the loss. The combinations of the design variables that maximize the signal-to-noise ratio are then selected for further consideration as product or process parameter settings. Taguchi suggests a two-stage procedure to arrive at the optimum combination of factors that maximizes the SN ratio while maintaining the mean response on the target.
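The three standard SN ratios can be written down directly. The formulas below follow the definitions commonly stated in the quality-engineering literature (the paper does not list them explicitly), and the replicate data are made up for illustration:

```python
import math

def sn_smaller_is_better(ys):
    """SN = -10 log10(mean of y^2): penalizes any nonzero response."""
    return -10 * math.log10(sum(y * y for y in ys) / len(ys))

def sn_larger_is_better(ys):
    """SN = -10 log10(mean of 1/y^2): penalizes small responses."""
    return -10 * math.log10(sum(1 / (y * y) for y in ys) / len(ys))

def sn_nominal_is_best(ys):
    """SN = 10 log10(mean^2 / variance): a function of (mu/sigma)^2."""
    mean = sum(ys) / len(ys)
    var = sum((y - mean) ** 2 for y in ys) / (len(ys) - 1)  # sample variance
    return 10 * math.log10(mean * mean / var)

# For each inner-array run, compute SN over its outer-array replicates and
# keep the control settings with the largest SN. Run labels are hypothetical.
replicates = {"run1": [9.8, 10.1, 10.0], "run2": [8.5, 11.4, 10.2]}
best = max(replicates, key=lambda r: sn_nominal_is_best(replicates[r]))
print(best)  # run1: similar mean, much smaller spread, hence larger SN
```

Because SN is a log ratio of signal to spread, maximizing it selects settings that are robust to noise, matching the text's point that maximizing SN minimizes expected squared error about the target.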
Comparative Assessment of the Resume and the Personal Strategic Plan: Perspectives of Undergraduate Business Students, Human Resource Professionals and Business Executives
Dr. Lee R. Duffus, Florida Gulf Coast University, Florida
This research sets out to assess perceptions of the relative efficacy of the resume and the Personal Strategic Plan (PSP) as vehicles for employment prescreening, career development, and job advancement. The target groups were undergraduate business students, human resource professionals and business executives. The results indicate that respondents perceive the resume as adequate for employment prescreening situations. However, compared to the PSP, the traditional resume is perceived as less effective in communicating nuanced information on individual characteristics that will position and advance current employees along the career ladder toward attainment of their career objectives. The study concludes that human resource specialists should emphasize increased usage of the PSP among current managerial employees instead of the traditional resume for situations involving career development or job advancement. This will both improve the efficiency of the prescreening process and enhance the likelihood of employment decisions that are congruent with the strategic human resource needs of the organization and the career objectives of the employee. In response to the observation by some researchers and human resource professionals that the format and content of the traditional resume limit its effectiveness as a presentation format for personal, performance and career information (O'Sullivan 2002, Otte and Kahnweiler 1995), several authors suggest the need to develop a) a career plan (O'Sullivan 2002, Portanova 1995, Otte and Kahnweiler 1995), b) a personal business plan (Stokes Jr. 1997), and c) a personal development plan (Higson and Wilson 1995, Bullock and Jamieson 1995, Barrier 1994). Unfortunately, none adequately emphasizes the marketing process, nor are they sufficiently broad-based to be effective in employment prescreening or other tasks involving employee development and advancement. 
In an attempt to bridge this gap, some authors suggest development of a personal strategic plan (Duffus 2001, Winchester 1999, Boivie 1993). Strategic planning is a managerial concept that specifies where the organization is headed and how management intends to achieve the targeted results (Thompson and Strickland 1992). The premise of modern marketing is identification of the needs and wants of consumers, then delivery of the product offerings that satisfy them. Cravens (1994) combines these concepts by defining strategic marketing planning as "the process of matching the resource capabilities of an organization with the current and future market and competitive situation, then structuring the marketing offer to achieve future objectives." While strategic planning tends to be utilized at the organizational level, it is equally applicable at the individual level in human resource management activities involving getting a job, career development, planning and advancement. A review of the literature on personal business plans (PBPs), personal development plans (PDPs) and career plans (CPs) identified no common definitions or content structure. However, even though they differ in detail with respect to content, similarities are evident. In addition to having identified objectives and planned pathways for their achievement, and involving only current employees, similarities include being 1) internal in focus, 2) silent on the impact of environmental and competitive forces on attainment of the organization's strategic objectives and 3) an ongoing process. The Personal Strategic Plan (PSP) is a personalized version of a strategic marketing plan. It is both a written document that outlines the time-related details for achieving the career expectations of an individual, and a living document that is used along the employment continuum, from job seeking to employment. 
More specifically, the PSP is "a marketing tool that introspectively and objectively assesses and positions an individual in the context of personal career objectives, the organization's strategic human resource needs and objectives, and the changing and competitive job environment" (Duffus 2001). Especially among potential employees, review of the resume is the first stage in the interview and job selection process, which often includes multiple interviews, confirmation of information provided, and verification of accomplishments (Stokes Jr. 1997, Hornsby and Smith 1995). Inasmuch as the primary purpose of the resume is to secure a job interview (Sharp 1991, Rauch 1991), it is the most important part of the prescreening process. According to Dutton (1996), the potential applicant should first assess the job requirements and human resource needs of potential employers to learn about the task and qualifications. Following this, he or she should prepare a resume that is congruent with the job task and desired qualifications. Several authors have discussed the importance of the resume (Canter 1998, Hornsby and Smith 1995, Braham 1993, Asdorian 1992, Garceau and Garceau 1992, Rauch 1991, Nesbit 1989, Oliphant and Alexander 1982), its length (Farr 1994), its appearance (Augustin 1991), and its content and construction (Hornsby and Smith 1995, Culwell-Block and Sellers 1994). In general, the resume should be succinct, standardized in format, logical, sequential, clear, and only one to two pages long (Rauch 1991). 
Despite consensus on the importance of the resume in the job search process, some scholars argue that the PSP, by focusing on career rather than job and addressing the match between the dynamic and strategic needs of the organization and employee skills, is a more efficient tool, both in satisfying the strategic human resource needs of the organization and in serving the personal development and career goals of the individual (Winchester 1999, Stokes 1997, Otte and Kahnweiler 1995, Portanova 1995, Higson and Wilson 1995, Boivie 1993). For situations involving jobs where the tasks are well defined, a simple resume is often adequate to assess the match between employee skills and capabilities and job-task needs. However, in spite of its wide usage, some researchers have long argued that the format and content of the traditional resume limit its effectiveness in presenting information to help staff cross the bridge into management or facilitate career development and advancement (Higson and Wilson 1995, Barrier 1994). At least five perceived disadvantages of the resume apply both to people seeking employment and to those seeking advancement. First, primarily because the traditional resume is a snapshot of where we are and where we have been during our career, it is historical rather than future oriented (Stokes Jr. 1997). Though it may relate to specific job situations, it does not adequately address the strategic human resource needs of the organization or how the individual is likely to measure up to them (O'Sullivan 2002, Winchester 1999, Stokes Jr. 1997). Second, it does not adequately capture the career expectations of the individual or outline their perceptions of the strategic path to achieve them (Winchester 1999, Bandura 1997, Otte and Kahnweiler 1995, Boivie 1993).
Course Content on Managed Care: The Graduate Program in Health Services Administration at Florida International University
Dr. Kristina L. Guo, Florida International University, North Miami, FL
Education on managed care is essential to student career advancement and organizational survival. To ensure that students are adequately prepared to face and manage in the evolving managed care environment, this study discusses the degree of coverage of managed care concepts in the curriculum of the Graduate Program in Health Services Administration at Florida International University. Using the 3rd Year Progress Report to the Accrediting Commission on Education in Health Services Administration and course syllabi, the findings indicate that of the 17 graduate courses in HSA, 14 offer a wide range of managed care content. Through an interdisciplinary approach and continuous curriculum improvement, faculty emphasize critical skills and knowledge that enable students to analyze and respond to managed care challenges in actual health care practice. The evolving complexity of the health care system has led to the increased use of managed care to contain health care costs, improve access to care and deliver health care more efficiently (Shortell et al. 1996; Knight 1998; Kongstvedt 1997, 2001; Wenzel 1998). While managed care has rapidly become the primary delivery system for health services, it also creates numerous challenges for health care professionals. One of the main problems is adequately preparing health care professionals to understand the nuances of managed care given the continuous systemic, environmental, political, economic and organizational changes (Brown and Brown 1995; Ziegenfuss, Jr and Weitekamp 1996). Making sense of the alphabet soup of managed care (HMOs, PPOs, POS, etc.) is difficult and often tricky. Health care professionals find themselves working among constant intricacies and ambiguities. A solid foundation in managing the structure, financing and delivery of health care is essential to career advancement and the ultimate survival of organizations. 
At Florida International University, the curriculum of the Health Services Administration Program provides the opportunity for students who are currently working in the health care field as administrators and clinicians, and for students striving for administrative positions, to expand their knowledge and be better prepared to work in health care settings involving various aspects of managed care. This article describes the Graduate Program in Health Services Administration at Florida International University, which awards the Master's degree in HSA (MHSA). Specifically, this article outlines the course content on managed care and its integration throughout the curriculum. Managed care is dominating the health care industry. Enrollment in health maintenance organizations (HMOs) reached 81.3 million as of January 1999 (Fox 2001). Managed care has taken root and thrives in many forms. Barton (2000) suggests that managed care is difficult to define and has different meanings based on the rapid evolution of new organizational forms. However, the key to the various definitions is managed care's integration of financing and delivery. Fox (2001) distinguishes managed care techniques, such as financial incentives, promotion of wellness and utilization management, from the managed care organizations that actually perform those functions. Interest in managed care is prevalent and escalating throughout the health care system. The history of managed care details its humble birth. In 1910, the Western Clinic offered a broad range of medical services to its members based on a fixed premium per month. Despite controversies and opposition, managed care struggled to survive. Other early examples include Baylor Hospital's prepaid health insurance arrangement of 1929, the Kaiser Foundation Health Plans of 1937 and New York City's Health Insurance Plan of 1944. Throughout the 1960s and early 1970s, managed care served only a very modest role in the financing and delivery of health care. 
The growth of managed care came about with the 1973 federal HMO Act, which authorized start-up funds and access to HMOs through employer-based insurance. Although the Act was encumbered by numerous requirements, it nevertheless signified increased government attention to managed care (Fox 2001). From the latter part of the 1970s to the 1980s, managed care established a firmer foundation. As managed care continues to evolve, distinctions among the various managed care organizations have become blurred. Restructuring plays a major role and consolidation occurs on a daily basis. The brief background on managed care described above underlines the need for education in managed care. First and foremost, education ensures that stakeholders have the skills, knowledge and understanding of managed care to make rapid and sound decisions that will strategically prepare their organizations not only to face the competition, but to stay ahead. Health care leaders know that continuous education will provide them the leverage to meet the needs of the changing environment. Second, education aids in identifying, understanding and managing the complicated relationships among the various key stakeholders, including managed care executives, hospital administrators, physicians, other clinicians, health care professionals and patients. Ensuring patient satisfaction in managed care through the promotion of patient education and wellness, and a focus on quality and access, is a priority. Establishing good working relationships among administrators, physicians and employees requires finesse. While some providers may view managed care organizations with mistrust, others perceive that collaboration is necessary in order to decrease competition. Furthermore, shedding light on the relationships between the private and public health care sectors is also crucial.
Global Economic Scenarios: In the Twenty-first Century for the Future of World Economic Development–The Allen Hammond Scenarios Projections for the Future
Dr. Richard G. Trotter, University of Baltimore, Baltimore, MD
This paper is an examination of world capitalism and economic and social development in terms of Allen Hammond's book, Which World? Scenarios for the 21st Century (1), which sets forth three scenarios for world economic and social development over the next fifty years: (1) Market World, (2) Fortress World, and (3) Transformed World. This paper looks at these three scenarios in terms of world economic trends as well as the value systems of the societies in question. Additionally, global trends will be examined in terms of such prescriptions for economic performance as privatization, free trade, and equitable distribution of income. The major areas of the world will be examined in terms of how effectively these strategies and prescriptions for economic success and development have been implemented and how successful they have been. With the fall of Communism in 1989 and the spread of market capitalism throughout the world in the 1990s, it was assumed by many practitioners and scholars that world capitalism was the key to human hope, prosperity and development. In the first decade of the twenty-first century, the optimism of the last decade of the twentieth century is being reexamined against the realities of market capitalism as a panacea for human development. Additionally, societies are questioning, in terms of their own particular values, what kind of economic and social system they want and the trade-offs they are willing to make with respect to the benefits and burdens of market capitalism. The most economically developed regions of the world, the United States, Europe, and Japan, while all using capitalism as the economic system of choice, have taken different roads. According to Martin C. 
Schnitzer, three varieties of capitalism have evolved: (1) Individualistic capitalism, as practiced in the United States, emphasizes (a) individualism, (b) short-term profit maximization, and (c) large income differentials. (2) Communitarian capitalism, also referred to as social market capitalism, is the major form of capitalism represented in Germany and Western Europe. This system has elaborate social welfare programs, less income inequity and an expanded state role. (3) In state-directed capitalism, as it exists in Japan and other East Asian countries, there is a closer relationship between government and business. While the Japanese are provided with social programs such as those in Europe, these programs are provided in large part by business firms. (2) At the present time, the former Soviet Union (the Russian Federation), Poland, the Czech Republic, and Hungary represent a dramatic transition from regulated state economies to countries seeking to replace the economics of Communism with the economics of market capitalism, often with mixed results. The economic dilemma of maintaining some of the aspirations of Communism (such as income equity) while reducing the uncertainties and inequities of market capitalism (such as unemployment) will have to be addressed. Immediately after the demise of Communism as an economic and social system, the Russian Federation sought to implement the transition to market capitalism through the privatization of industry and agriculture, the development of a modern banking system and the introduction of democratic political institutions. Tragically, at the beginning of the twenty-first century the economic and social results have been poor for the Russian Federation. The economic conditions of the vast majority of Russians may well have deteriorated from their lives under Communist rule, with the development of dramatic inequalities in the distribution of income and wealth. 
In addition to the above, political corruption is high, with privatization taking the form of a shift in the ownership of enterprises from the state to a corrupt elite. Additionally, with a great shortage of goods, barter arrangements have developed. Poland, the Czech Republic, and Hungary have developed economically better than the Russian Federation; privatization was undertaken, and democratic institutions were introduced. Of all the European bloc nations that performed relatively well in the 1990s, Poland had the best economic performance. Corruption in government is limited, and inflation and unemployment have been contained. Privatization in the Czech Republic took place with relative effectiveness. During the early 1990s the economic performance of the Czech Republic was relatively good, but during the latter part of the decade it declined. Hungary instituted programs with relative success, as did Poland and the Czech Republic, and during the later 1990s Hungary showed signs of economic improvement. The rampant corruption evident in the Russian Federation is not prevalent in Poland, the Czech Republic, and Hungary. However, while not as extreme as in the Soviet Union, there has been some increase in poverty and widening disparities in the distribution of income. It has been hoped that market capitalism, or some variant of it, will help the so-called less developed countries. For the purpose of this paper, the less developed countries in terms of the Hammond scenario will include (1) China, (2) India, (3) Latin America: Argentina, Brazil, Mexico, (4) Africa, and (5) the Middle East.
Security of Computerized Accounting Information Systems: An Integrated Evaluation Approach
Dr. Ahmad A. Abu-Musa, Department of Accounting & MIS, KFUPM, Saudi Arabia
Evaluating the security of CAIS is not an easy task. A review of the available literature in this area reveals considerable confusion and inconsistency, since research in evaluating the security of information systems is considered to be in its infancy. In this paper, evidence regarding the need for and the importance of evaluating the security of CAIS is covered. The alternative approaches for evaluating CAIS security used and implemented in the previous literature are presented. Moreover, the requirements for selecting appropriate security countermeasures, as well as for implementing an effective security evaluation technique, are discussed. The different alternative approaches for evaluating the security of CAIS will be considered and the main requirements for implementing an information security tool will be presented. In addition, the limitations and problems of information security evaluation methods will be mentioned. Goodhue et al. (1991) mention that there are numerous methodological questions regarding how to clearly measure security concern. Although many previous studies have employed user perception as an empirical measure, such measures may lack theoretical clarity, because they lack a theoretical underpinning. The most commonly cited reference discipline for these measures has been job satisfaction research. However, "IS satisfaction" has not been well enough defined to clarify how it is similar to or different from "job satisfaction" (p. 15). Risk analysis of the information technology environment represents another approach for evaluating information security. A literature review by Eloff et al. (1993) indicated that inconsistent terminology had been used in previous studies. These differences in terminology gave rise to the need for a standard set of terms to be used for the comparison of various risk analysis methods. 
As Kumar (1990) points out, evaluation in general serves to: verify that the system met requirements; provide feedback to development personnel; justify the adoption, continuation or termination of a project; clarify and set priorities for needed modifications; and transfer responsibilities from developers to users (from Conrath et al., 1993, p. 267). According to Symons et al. (1993), evaluation plays a crucial role at many stages of information systems development. Before introducing a new system, a feasibility study should be carried out to appraise it and to decide whether or not to purchase it. During implementation, evaluation functions as a learning and control mechanism; it is usually done informally. Post-implementation evaluation, although theoretically valuable, is rarely carried out by organizations. However, all types of business systems need internal controls and security safeguards. During the design, implementation, production and maintenance phases of a business life cycle, some form of documentation of the controls provided in the system is necessary as a means of communication between designers, users and auditors, and for purposes of preserving information for subsequent use by users and auditors who might not have been initially involved in the system development cycle (Computer Security Auditing and Controls, 1991). Solms (1996) has noted that nowadays many business partners need to link their computer systems for business reasons, but that they first want to receive some sort of proof that their partners have an adequate level of information security in place. He also suggested that a security evaluation and certification scheme that could instil confidence and assurance regarding information security status in external and business parties would solve a lot of problems for the commercial world. 
Accordingly, the objective of Solms's 1996 paper was to show that the commercial world needed an information security evaluation scheme that could provide assurance to internal as well as external parties that adequate security controls were installed, and to define a set of criteria that such a security evaluation scheme must satisfy to be successful. Evaluating the security of CAIS is not an easy task: the available literature in this area reveals considerable confusion and inconsistency, since research in evaluating the security of information systems is still in its infancy. Conrath et al. (1993), in an extensive search of the information systems evaluation literature, revealed that there were no generally accepted performance measures. Since information security is an important and integral part of the accounting system, it comes under the umbrella of that result. In the following section, the researcher briefly presents the alternative approaches that could be used and implemented in evaluating the security of CAIS. According to Solms (1996), a number of evaluation and certification techniques could be linked to information security. These techniques are: Trusted Security Evaluation Criteria Schemes; ISO 9000 (BS5750), the leading international quality assurance scheme;
Security of Computerized Accounting Information Systems: A Theoretical Framework
Dr. Ahmad A. Abu-Musa, Department of Accounting &MIS, KFUPM, Saudi Arabia
It has been claimed that "security of computerized accounting information systems (CAIS)" is an ill-defined term in the accounting literature. The current paper is conducted in response to numerous calls for research that have emphasized the necessity of conducting theoretical research to enhance the body of knowledge concerned with CAIS security. The paper addresses the concept of CAIS security and its main components in an attempt to clarify confusion in the area. Through a theoretical conceptualization of information and systems security, an integrated theoretical framework of CAIS security is introduced. In this paper, the concept and meaning of CAIS security will be presented. The importance of CAIS security as a significant element of an organization's success and survival will be discussed. The security objectives of CAIS and its main components will be highlighted; and finally, an integrated approach to CAIS security will be presented. Security is an ill-defined term in the technical literature. It has been used to denote the protection and well-being of political entities, as in the term "national security". It may also refer to industrial protection by security or protection departments. Police forces having limited responsibilities are also sometimes called security forces. The standard lexical definition equates security with freedom from danger, fear, anxiety, uncertainty, economic vicissitudes, and so forth (Parker, 1981, p. 39). Granat (1998) argued that the term "security" might mean different things to different people. To some it is a concern for preserving the "date" integrity of existing database records into the new millennium; to others, it is securing privacy for proprietary and restricted information; to yet others, it means preserving original records and protecting their integrity. 
The International Information Technology Guidelines issued by the International Federation of Accountants (IFAC) in 1998 stated that, "The concept of security applies to all information. Security relates to the protection of valuable assets against loss, disclosure and damage. In this context, valuable assets are data and information recorded, processed, stored, shared, transmitted, or retrieved from electronic media. The data or information must be protected against harm from threats that will lead to its loss, inaccessibility, alteration or wrongful disclosure". However, most of the literature defines information security as the protection of information confidentiality, integrity and availability. This definition is used as equivalent to "prevention against security breach". Accordingly, information security could also be defined as "the prevention of the unauthorized disclosure, modification or withholding of information". For example, Marro (1995) defined information security as "the protection of, and recovery from, unauthorized disruption, modification, disclosure or use of information and information resources, whether accidental or deliberate". Joy and Bank (1992) defined the security of an information system as "the ability to ensure that only legitimate, authorized transactions and/or data are input to, and output from, a system with no unauthorized, illegitimate insertions, deletions, modifications or replays occurring between the time of input and the time of receipt by the intended recipient". Reviewing the available literature, it seems that there is some confusion regarding the terms "security", "information security" and "information systems security". Most of the previous literature has used these terms interchangeably to mean the protection of information confidentiality, integrity and availability in an organization. 
According to the information security glossary, however, there are clear distinctions between the terms "Security"; "Information Security"; "IT System Security"; and "Information Systems Security". "Security" is defined as "the protection of information availability, integrity and confidentiality". This definition is exactly equivalent to both "the prevention of a breach of security" and "the prevention of the unauthorized disclosure, modification or withholding of information". According to this glossary, the term security is used here in the sense in which it is most commonly used in military and government circles (ftp://ftp.cordis.lu/pub/infosec/docs/s2001en.txt). The term "Information Security" is defined as "the combination of confidentiality, validity, authenticity, integrity and information availability" (Ibid.). Therefore, the term "Information Security" is not as wide as "Information Systems Security", since it does not address the security issues related to the protection of the facilities used in handling information. However, it addresses all the other security issues associated with the protection of the information (information availability, integrity, confidentiality, authenticity and validity); and is thereby wider than the definition of "security". On the other hand, the term "IT System Security" could be defined as "the combination of system availability and of the information security of the software and associated parameters forming part of the IT system itself". Finally, "Information Systems Security" is defined as "the combination of information security and IT system security for a given information system". Therefore, the term "information systems security" is the widest in meaning of this set of terms, covering all aspects of the security of the information and data handled by the information system, as well as the IT resources themselves (including software and associated control tables). 
The definition covers all aspects of security relevant to an information system (see, again, ftp://ftp.cordis.lu/pub/infosec/docs/s2001en.txt).
Dr. Jae J. Lee, State University of New York, New Paltz, NY
This paper explores a Monte Carlo approach to dealing with parameter uncertainty in extracting signals from economic time series, using Monte Carlo integration with Acceptance/Rejection sampling. This approach treats the set of unknown parameters as a random vector, so that the parameter uncertainty can be eliminated by integrating out the vector. Through a simulation study, the approach is compared with the commonly used approach in terms of mean square errors. Many economic time series Zt (possibly after transformation) can be written as the sum of an unobserved signal component St and a nonsignal component Nt, namely Zt = St + Nt (1), where the components St and Nt follow ARIMA model specifications. For example, in repeated sample surveys (Scott et al., 1977; Bell & Hillmer, 1990), St is an estimate of the population value derived from standard sample survey methods and Nt is sampling error, with Zt observable at time t. In seasonal adjustment (Box et al., 1978; Hillmer & Tiao, 1982), St is an unobserved seasonal component and Nt is an unobserved non-seasonal component, with Zt observable at time t. Extracting the signal component from the observed Zt means finding a minimum mean square estimator (MMSE) of the unobserved signal component, St, and its mean square error (MSE) at some point in the sample. Conditional on the full set of observations Z = (Z1, ..., Zk), the MMSE of the signal component, St, is its expectation E(St | Z, θ) and the MSE is its variance Var(St | Z, θ) (2), where θ is a vector of parameters in the ARIMA specifications of St and Nt (Harvey, 1993). The MMSE and MSE of the signal component can be obtained with the Kalman filter/smoother given ARIMA specifications of the signal and nonsignal components. In practice, however, since the ARIMA specifications of the components are usually unknown, they must be estimated from the information available. Forms of ARIMA models are identified using information from the observed time series and knowledge about the nonsignal component, such as sampling errors. 
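To make the Kalman filter/smoother computation concrete, the following sketch extracts a signal in the simplest signal-plus-noise case, a local-level model in which St is a random walk and Nt is white noise. This is an illustration under assumed model and parameter values, not the paper's own specification or code; the function name and variances are hypothetical.

```python
import numpy as np

def kalman_smooth_local_level(z, q, r):
    """MMSE estimate of a random-walk signal S_t observed as Z_t = S_t + N_t.

    q: variance of the signal (state) innovations
    r: variance of the noise component N_t
    Returns the smoothed mean and variance of S_t given all observations.
    """
    n = len(z)
    a = np.zeros(n)              # filtered mean of S_t
    p = np.zeros(n)              # filtered variance of S_t
    preds = np.zeros((n, 2))     # one-step predictions (mean, variance)
    a_pred, p_pred = z[0], 1e6   # (near-)diffuse initialization
    for t in range(n):
        preds[t] = a_pred, p_pred
        k = p_pred / (p_pred + r)             # Kalman gain
        a[t] = a_pred + k * (z[t] - a_pred)   # update with observation z[t]
        p[t] = (1.0 - k) * p_pred
        a_pred, p_pred = a[t], p[t] + q       # predict S_{t+1}
    # Rauch-Tung-Striebel backward smoothing pass
    s_mean, s_var = a.copy(), p.copy()
    for t in range(n - 2, -1, -1):
        c = p[t] / preds[t + 1][1]
        s_mean[t] = a[t] + c * (s_mean[t + 1] - preds[t + 1][0])
        s_var[t] = p[t] + c ** 2 * (s_var[t + 1] - preds[t + 1][1])
    return s_mean, s_var
```

For fixed variances (q, r) the smoothed mean and variance are the conditional expectation and MSE of St given the full series, i.e. the quantities computed for a given parameter vector in the approach described above.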
After the forms of the ARIMA models of the components are identified, a commonly used approach is to use a maximum likelihood estimate (MLE) of the vector θ. Then, using the identified models with the MLE of the vector, the MMSEs of St and Nt, and their MSEs, are obtained by applying the Kalman filter/smoother. More details are found in Dagum et al. (1998) and Pena et al. (2001). Therefore, in practice, extracting the signal component involves two sources of uncertainty: model uncertainty, due to the fact that the ARIMA models for St and Nt are unknown, and parameter uncertainty, due to the need to estimate the parameters in the identified ARIMA models. In this paper, an approach to dealing with parameter uncertainty in extracting the signal component is investigated. The proposed approach is to treat θ as a random vector and find its posterior density based upon Bayesian inference, and then to use the posterior density as the integrand in order to integrate out the vector θ. A Monte Carlo approach is used for the integration, with Acceptance/Rejection sampling as the sampling scheme. In a simulation study with widely used ARIMA models, the performance of the approach is compared with that of the commonly used approach, that is, using maximum likelihood estimates (MLE) of θ. The performance of each approach is measured using mean square errors and mean biases of the extracted signal components. For purposes of this investigation, it is assumed that the forms of the ARIMA models for St and Nt are known but the vector θ is unknown. Section 2 discusses how to apply Bayesian inference to find a posterior density of the vector θ and how to modify the form (2) using the posterior density. Section 3 illustrates the Monte Carlo approach to integrating out the vector. Section 4 presents the Acceptance/Rejection sampling scheme. Section 5 describes the design of the simulation study. The software used, and the conclusions and discussion, follow in Section 6 and Section 7, respectively. 
Bayesian inference assumes that the vector θ is a random vector with a prior probability density function (pdf), p(θ). Given the observed data Z, a posterior pdf of the vector θ is obtained using Bayes' theorem, p(θ | Z) ∝ L(θ | Z) p(θ) (3), where L(θ | Z) is the likelihood function and ∝ means 'proportional to'. Equation (3) reflects that both the prior information about the vector θ, through the prior pdf, and all the sample information about θ, through the likelihood function, are combined to specify the posterior pdf. The likelihood function can be computed for any particular value of the vector θ. A prior pdf should be chosen to represent any available prior information. In Bayesian inference, specifying a prior pdf is subjective; therefore, a different prior pdf may result in a different form of posterior pdf. However, as the sample information grows, the posterior pdf computed from (3) is dominated by the likelihood function and converges to the same posterior pdf for most prior pdfs. Since this paper investigates situations in which there will be a large sample size, k, it is appropriate to use Jeffreys' diffuse prior (Zellner, 1971). The MMSE of the signal component and its MSE, unconditional on the vector θ, can be obtained from (4) and (5): E(St | Z) = ∫ E(St | Z, θ) p(θ | Z) dθ (4) and MSE(St | Z) = ∫ [Var(St | Z, θ) + (E(St | Z, θ) − E(St | Z))²] p(θ | Z) dθ (5). It is noted from (4) and (5) that the integration is done with respect to the posterior pdf. Thus, the problem of eliminating parameter uncertainty in extracting the signal component reduces to the integration of functions of E(St | Z, θ) and Var(St | Z, θ) with the posterior pdf as the integrand. The integration in (4) and (5), however, cannot be computed analytically, since the likelihood function of the model (1) does not have a closed form, and therefore the posterior density from (3) does not have a closed form. A Monte Carlo approach is appealing for integration where an analytical solution is not possible.
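The Monte Carlo integration with Acceptance/Rejection sampling can be illustrated with a toy example. The Beta-shaped target below is a hypothetical stand-in for the posterior density of the parameter vector, known only up to its normalizing constant, as the paper's posterior is; the function names, proposal, and envelope constant are illustrative assumptions.

```python
import numpy as np

def accept_reject_sample(log_target, proposal_draw, log_proposal, log_m, n):
    """Draw n samples from an unnormalized target density by Acceptance/Rejection.

    Requires log_target(x) <= log_m + log_proposal(x) for all x (envelope condition).
    """
    out = []
    while len(out) < n:
        x = proposal_draw()
        u = np.random.uniform()
        # accept x with probability target(x) / (M * proposal(x))
        if np.log(u) <= log_target(x) - log_m - log_proposal(x):
            out.append(x)
    return np.array(out)

# Toy "posterior": a Beta(3, 5)-shaped density known only up to a constant.
log_target = lambda x: 2 * np.log(x) + 4 * np.log(1 - x)
# Uniform(0, 1) proposal; M must bound the unnormalized target from above.
proposal_draw = lambda: np.random.uniform()
log_proposal = lambda x: 0.0
log_m = np.log(0.025)  # max of x^2 (1-x)^4 on (0,1) is about 0.022, so 0.025 works

np.random.seed(1)
draws = accept_reject_sample(log_target, proposal_draw, log_proposal, log_m, 5000)
# Monte Carlo integration: a posterior expectation is approximated by
# averaging the function of interest over the accepted draws.
posterior_mean = draws.mean()  # true Beta(3, 5) mean is 3/8 = 0.375
```

In the paper's setting, each accepted draw of the parameter vector would feed the Kalman smoother, and averaging the conditional signal estimates over the draws approximates the integral in (4) in the same way this average approximates the toy posterior mean.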
Toward An Interdisciplinary Organizational Learning Framework
Dr. Tony Polito, East Carolina University, Greenville, NC
Dr. Kevin Watson, Marist College, Poughkeepsie, NY
Organizational learning theory is multidisciplinary. There is no current consensus regarding a model for organizational learning theory. Even a consistent definition of organizational learning has been elusive within the literature, though typologies of organizational learning theories are found. This paper searches for points of agreement regarding organizational learning among organizational theorists, then gives special attention to the economic perspective of organizations and learning. Organizational learning comprises both behavioral and cognitive processes; higher-level learning, unlike lower-level learning, involves adaptation. Researchers disagree as to whether change or effectiveness is requisite to organizational learning. Researchers generally agree that organizational learning that does effect change involves systematic shock anticipated by tension, but differ regarding the constitution of that shock and tension. Specific economic perspectives of the firm can also provide a framework for organizational learning theory. Much of the neoclassical theory of the firm, which views the firm as a set of human-resource holders maximizing profit under a known production function, is now under question. Organizational theorists now generally embrace the relevant transaction cost and agency perspectives. Harvey Leibenstein, a Harvard economist, views the firm in terms of internal efficiency, embraces Argyris & Schön’s perspective of organizational learning as a process of error handling, sees the individual actor’s motivation to admit, detect and correct error as a special case of the productivity problem, and analyzes it in a game-theoretic, agency-like manner. Leibenstein’s perspective respects much of the noted concordance regarding organizational learning. Organizational learning theory is multidisciplinary (Dodgson, 1993). 
Within the literature, researchers note the relevance of psychology, organizational theory, innovation management, strategic management, economics, organizational behavior, sociology, political science, information systems, anthropology, and production/industrial management (Argyris & Schön, 1978b; Dodgson, 1993; Fiol & Lyles, 1985; Leibenstein & Maital, 1994; Perrow, 1986; Shrivastava, 1983). In fact, Argyris and Schön classify organizational learning theories in parallel with their associated disciplines (Argyris et al., 1978b). There is, however, a noticeable absence of a multidisciplinary synthesis of organizational learning research (Huber, 1991). Dodgson believes such a synthesis will serve to avoid the introspective and parochial views seen in the existing literature and that synthesis is requisite for future research (Dodgson, 1993). This paper searches for points of agreement among organizational theorists, gives special attention to the economic perspective of organizations and learning, and focuses on Leibenstein’s perspective as a point of intersection. There is no current consensus regarding a model for organizational learning theory. In a special edition of Organization Science on organizational learning, Simon states that organizational theorists should strive toward greater consistency of terminology in describing organizational learning, perhaps drawing on that of cognitive psychologists (Simon, 1991). Other researchers similarly note the lack of a widely accepted theory or model. 
Even a consistent definition of organizational learning has been elusive within the literature, as seen in the partial, chronological listing below:
- The adaptation of organizational goals, attention rules and search rules as a function of its experience (Cyert & March, 1963)
- A series of interactions between adaptation at the individual/subgroup and organizational levels that is stimulated through stress (Cangelosi & Dill, 1965)
- The ability to detect and correct error, the mismatch of outcome to expectation (Argyris et al., 1978a)
- The process by which knowledge about action-outcome relationships and the effects of the environment on these relationships is developed (Duncan & Weiss, 1979)
- The development of insights, knowledge, and associations between past actions, the effectiveness of those actions, and future actions (Fiol et al., 1985)
- The encoding of inferences from history into routines that guide behavior (Levitt & March, 1988)
- The continual expansion of the organization’s capacity to create its future (Senge, 1990)
- The acquisition of knowledge by any of its units that it recognizes as potentially useful (Huber, 1991)
- The skill of creating, acquiring, and transferring knowledge, and of modifying its behavior to reflect new knowledge and insights (Garvin, 1993)
Typologies of organizational learning theories are found. Shrivastava types four organizational learning perspectives: the process of organizational adaptation, the process of sharing and changing assumptions, the development of an action-outcome knowledge base, and the institutionalization of experience (Shrivastava, 1983). Argyris & Schön define six categories that are based on organizational definitions: organization as group, as agent, as structure, as system, as culture, and as politics (Argyris et al., 1978b). Organizational learning is comprised of both behavioral and cognitive processes. 
Fiol & Lyles find this distinction persists within the literature and offer a resolution by distinctly defining lower-level learning, associations formed through repetition of past behaviors, and higher-level learning, the development of new rules and associations regarding new actions (Fiol et al., 1985). They equate these definitions with Argyris and Schön’s single-loop learning, the process of error detection and correction in which present norms, policies, or objectives are left undisturbed, and double-loop learning, the process of error detection and correction involving their modification (Argyris et al., 1978a).
The Dilemma of Governance in Latin America
Dr. José Gpe. Vargas Hernández, Centro Universitario del Sur, Universidad de Guadalajara, México
The last decades of the 20th century have seen the institutions of governance in Latin American countries affected by modest macroeconomic achievements, reduced economic growth, and the development of an extremely fragile democracy. The implantation of the new neoliberal model of state consolidation has come at a high cost, and has produced neither the expected strengthening of the political, economic and social spheres, nor the expected gains in efficiency, equity and freedom. This so-called economic liberalization has generated institutional instability in the structure and functions of the state, limiting the reach of democracy and legality, and ensuring that the effects of the associated managerial orientation which has transformed public administration are largely negative. Looking forward into the 21st century, a pessimistic prediction is that these tendencies will continue, producing similar unstable mixes of democratic populism and oligarchic pragmatism. More optimistically, the Latin American states may come to see that genuine social development is necessary for sustained economic growth, and introduce policies to achieve that outcome. The globalization processes surprised Latin American countries because they lacked the political-economic mechanisms and the institutions necessary to assimilate its effects in such a way as to achieve social justice in the distribution of the wealth that was created. The challenges posed for Latin America by globalization require a further revision of the romantic utopias that came first with the Bolivarian independence of the early 1800s and subsequently with several popular revolutions in various parts of the region. Whatever its benefits, globalization clearly has perverse effects. The 100 biggest transnational companies now control 70% of world trade, although no significant relationship exists between the growth of world trade and that of world gross product. 
The volume of the financial economy is 50 times that of the real economy. Most significantly for present purposes, the market value of the 1000 biggest companies ($US23,942,986 million) is equivalent to 11.8 times the gross domestic product of all the Latin American countries, and the market value of General Electric alone ($US520,250 million) is equivalent to the gross domestic product of Mexico. Any one of the 23 most powerful multinationals has sales greater than Mexico's total exports. Again, the value of the 9,240 commercial mergers and acquisitions throughout the world in 1999 reached $US2,963,000 million, compared with the annual gross domestic product of all the countries of Latin America and the Caribbean, calculated by the World Bank to be $US1,769,000 million (3). A brief survey by Lazcano (2000) of the impact of globalization on the pattern of development in Latin American countries identifies several other outcomes: economic dependence on the exterior, particularly the United States and the European Union, has deepened; financial crises, devaluations and bank rescues have concentrated wealth in less than 10% of the population; economic growth has slowed, productive plant has been destroyed, and underemployment and unemployment have increased; the northwards flow of Mexican (and other Latin American) workers (i.e., to the United States) has increased; privileges have been granted to foreign capital in relation to the financial system and the servicing of the external and internal debt; economic integration has been towards the outside, with economic disintegration internally; and the possibility of a sustainable development pattern and the range of options in economic policy-making have been reduced. In all these ways the social fabric of Latin American countries has been disrupted, the income of the general population reduced, local wealth transferred to the exterior, poverty levels expanded, and indigenous inhabitants excluded from the social pact. 
The role of the state, and of the public sector which most directly supports and serves that role, is central to each of the three big challenges Latin America faces as it enters the new millennium. These challenges affect the economic, the social and the political spheres respectively, although of course there are many connections between them. The first is to achieve sustainable economic growth within the market economy; the second, to achieve fair and equal distribution of available income; and the third, to remove the obstacles that block development of state institutions that will allow a higher degree of democratic participation in governance. Sadly, it seems that the implementation of policies that reduce social inequalities enters into conflict with the logic of capital accumulation. Thus the privatization of public enterprises and the associated destruction of productive chains have together resulted in a growth of unemployment and an increase in the numbers below the poverty line. The lack of appropriate employment opportunities is one of the main concerns of Latin Americans at the turn of the century (Duryea & Székely, 1998). Not enough employment is generated, and only a few individuals have access to well remunerated work. The economic cycles of Latin America in the 1990s have allowed an average growth rate of 3.2% that has achieved little for the poorest sections of the population. The growth rate has slowed in the last few years, and this is likely to continue into the new century due to the pressures of globalization as described above. Financial crises continue, making it difficult to maintain macroeconomic stability. These negative results deepen social dissatisfaction and lead to social protests demonstrating broad dissent against the newly adopted economic policies. The inability of governments to overcome such problems points to the lack of appropriate governance arrangements. 
In some areas, the rule of government is virtually absent, and chaotic situations have arisen marked by mass illegality and barbarism. Governance, in terms of the capacity of the state to solve the problems of society, is reduced to arrangements among different political-elite groups. Poverty and inequality are of course not new in Latin America, and their earlier manifestations have been explained as the result of a pattern of Iberian colonization (Pinto & Di Fillipo, 1979). While this doesn't explain why the former colonies of the British, French and others in the Caribbean and Latin America also have much poverty, Yañez (2000) discerns an Iberian institutional plot that favoured the formation of economies with high transaction costs, ill-defined property rights and incomplete markets, where inequality and exclusion are the norm. As former Spanish President Felipe González put it, the first challenge for the prevailing Latin American economic pattern is to put an end to poverty, whose continuing existence explodes the neoliberal economic model (Sosa Flores, 2000). It is broadly acknowledged by academics and intellectuals that the Latin American social structure is a "pigmentocracy" whose peak is represented by the direct descendants of the Spanish aristocrats, tall in stature and light-skinned, well educated, and owners of the production factors of land and capital. At its base are the direct descendants of the indigenous population, shorter in stature and dark-skinned. Between these two strata is the big band of mestizos or mixed-bloods. The Spanish settlers used military force and the powers of the state to assure their economic and political dominance over the lower-strata majorities (Chua, 1998: 17-18). The persistence of this social stratification until the present time is one of the causes of social exclusion and it constitutes a serious problem for good governance: the dominant social stratum owns the major corporations and the main means of production. 
But the market is not the source of its dominance, and so the new competitive atmosphere of globalization could be its tomb. Equally, marketization can open up opportunities for disadvantaged groups that previously had no chance to participate actively in the economy. Globalization certainly imposes pressures, but many of the sources of poverty are internal to Latin American society: lack of knowledge, education and science, lack of capital equipment of all types, lack of incentives for individual action (except for those in big government or big corporations), lack of institutions that protect people’s lives and their property, a general absence of the “rule of law”, and often predatory governments. Public policies are needed to address all these issues.
Changes of Economic Environment and Technical & Vocational Education in Korea
Dr. Namchul Lee, Korea Research Institute for Vocational Education & Training, Seoul, Korea
Dr. Ji-Sun Chung, Korea Research Institute for Vocational Education & Training, Seoul, Korea
Dr. Dennis B. K. Hwang, Bloomsburg University, Bloomsburg, PA
The purpose of this paper is to investigate the changes in the economic environment and in technical & vocational education since 1985. This paper provides the basis for annual updates and identifies trends in implemented policies in the field of technical and vocational education. In addition, this paper is intended to provide useful information about the current status and future direction of technical & vocational education in Korea for government policy makers and school educators. Expected changes in the industrial structure of the nation will require changes in the emphasis and weighting given to the technical vocational education (hereafter TVE) system in Korea. The Korean economy has been transforming from a manufacturing-based structure to a knowledge-based one owing to the continuous development of new technology, especially information and communication technology (OECD, 1996). These trends have two important implications for technical and vocational education programs. They signal an ongoing shift in the education and training fields that are required of the Korean workforce, as well as shifts in the levels of education and training. TVE programs that prepare students for knowledge-based manufacturing jobs include high technology, medium-high technology, information communication technology (ICT), finance, business, health, and education. In Korea, the TVE programs under the formal education system are provided at both high schools and junior colleges (Ministry of Education, 2001). In this paper we review the literature and statistical data on this topic by studying changes in the industrial structure, labor force participation, and TVE programs since 1985. Understanding labor market trends provides a context for analyzing trends in TVE. 
For example, if participation in TVE programs parallels changes in the economy, one would expect to see a decline in enrollments in agriculture and manufacturing programs in recent years and an increase in enrollments in service and information communication technology related programs. The major purpose of this paper is to provide a picture, on the basis of annual updates, of trends in implemented policies in the field of TVE, and to serve as a working tool for analysis, policy-making, and policy formulation for policy makers in the field. The remainder of this paper is organized as follows. Section 2 briefly reviews the literature on the relationship between employment and changes in the industrial structure in Korea. Section 3 presents the major TVE trends in terms of enrollments in vocational high schools and junior colleges. Section 4 shows the TVE outcomes by employment. The final section presents the policy strategy in Korea. Policy makers and technical & vocational educators need information about the status and direction of TVE in Korea. To meet their needs, this paper is intended to provide answers to the following questions: First, how large is the TVE enterprise at both the vocational high school and junior college levels, and is it growing, shrinking, or holding constant over time? Second, who participates in TVE, and is this changing? Third, what are the major national economic and labor market trends and their implications for TVE programs and policies? Fourth, what are the junior college and labor market outcomes related to participation in TVE? The key goal of the Korean TVE programs is to prepare students with adequate knowledge and skills for the work force and for management roles in various industries after their graduation. 
Therefore, the TVE programs should theoretically and practically be modified in order to meet the dynamic changes in the business environment and labor market. A study of employment and its changes in the industrial structure will enable us to understand the changes in the Korean economic structure and the demand shift of the labor market across industries. In 1979, agriculture and fisheries accounted for 35.8 percent of total employment, while mining and manufacturing accounted for 23.6 percent and services for 40.6 percent. However, in 2000, the share of agriculture and fisheries employment was reduced to 10.9 percent, while the manufacturing and service industries accounted for 20.2 percent and 68.9 percent, respectively (NSO, 2001). Employment in the primary and manufacturing sectors thus decreased by 24.9 and 3.4 percentage points, respectively, from 1979 to 2000, while the service sector increased by 28.3 percentage points during the same period. Total employment in manufacturing has constantly declined since 1990. In contrast, employment in the services sector has continued to grow, although growth has slowed somewhat in recent years. Moreover, within industrial sectors, knowledge-based industry sectors perform well above other sectors in terms of job creation (Bank of Korea, 1998). Korea is shifting from a manufacturing-based economy to a service/technology-based economy. The implication of this shift for the TVE programs is that the TVE curricula should be adequately modified to prepare students for these growing sectors. The programs should include ICT, finance, insurance, business services, and transportation. The labor force participation rate among women increased from 26.8 percent in 1960 to 48.3 percent in 2000, an increase of 21.5 percentage points over the period. Also, the ratio of the female labor force participation rate to the male rate increased from 36.5 percent in 1960 to 65.2 percent in 2000 (NSO, 2001). 
There has been a dramatic change in labor supply in the number of working women in Korea. Women were added to Korea's work force during the 1980’s three times faster than men. These additional working women have made a great contribution to increasing family income. Although the number of Korean women aged 16 and older increased by only 5 percent during the 1980’s, the number of women in the labor force increased by 20 percent. The percentage of all women aged 16 years and older in the labor force increased from 46 percent in 1970 to 53 percent in 1996. However, the labor force participation rates for men are higher than those for women at any time and in every age group. In general, the rates for women have been rising and the rates for men have been declining. Although labor force participation rates for specific groups change over time, the overall pattern is fairly consistent across age groups and sexes. Labor force participation is relatively low for young persons (aged 15 to 19) because of school or child care responsibilities. It rises during the working years, ages 20 to 59, and declines after age 60. The participation rate for persons aged 25 to 29 increased 23.9 percent from 1980 to 2000; for those aged 30 to 39, the rate increased 17.3 percent; and for those aged over 60, the rate increased 12.9 percent. The largest increase in labor force participation rates among age groups was for those aged 25 to 29, while the participation rate for persons aged 15 to 19 decreased 21.9 percent during the same period.
An Exploratory Analysis of Customer Satisfaction
Dr. Turan Senguder, Nova Southeastern University, Ft. Lauderdale, FL
Satisfaction is the consumer's fulfillment response. It is a judgement that a product or service feature, or the product or service itself, provided a pleasurable level of consumption-related fulfillment, including levels of under- or over-fulfillment. Here, pleasurable implies that fulfillment gives or increases pleasure, or reduces pain, as when a problem in life is solved. Dissatisfaction is the displeasure of underfulfillment, although overfulfillment can also be dissatisfying. It is well known among marketers of "style" goods that one purpose of new products is to create dissatisfaction with the prevailing style - a common strategy of automobile companies through the release of new models. A first-time consumer: Imagine a consumer with no experience in buying a particular product. Having an interest in its purchase, the consumer might read advertisements and consumer guides to acquire information. This information, usually regarding benefits the product will deliver, provides the consumer with expectations about the product's likely performance. Because a number of suitable alternatives are available, this consumer must choose among them. Thus choosing one alternative requires that the consumer forgo the unique features of the others. This creates two problems. First, the consumer may anticipate regret if the chosen alternative does not work out as well as other choices might have. Second, until the consumer has consumed, used, or sufficiently sampled the product (as in driving a car over a period of time), an apprehension or tension, known more commonly as dissonance, will exist over whether the choice was best. Once the product is used and its performance is evident, the consumer is in a position to compare actual performance with expectations, needs or other standards, resulting in an expectation-performance discrepancy. Having purchased a product previously, the consumer has probably developed an attitude toward it. 
Here an attitude is a fairly stable liking or disliking of the product based on prior experience. It is also possible for an attitude to develop from prior information without experience, as when consumers develop biases for or against brands based on the brands' images in the marketplace. This attitude now forms the basis for the consumer's expectation in the next product encounter. It is likely that the attitude is tied fairly strongly to the consumer's intention to repurchase the product or service in the future. Specifically, prepurchase thought centers on ideal characteristics and specific attributes of the between-brand variety, whereas postpurchase thoughts are more abstract, focusing on the goal outcomes of the purchase. Two very helpful structured approaches may be used to assist the researcher in identifying key satisfaction determinants from consumer responses. The first approach was actually designed to understand how consumers process information in buying and using products, rather than to discover how consumers form satisfaction and dissatisfaction judgements. A good operational example of the standardized framework is SERVQUAL in the context of service delivery; the instrument is designed to predict service quality, not satisfaction. Increased customer satisfaction has immediate consequences for customer attitudes and behavior. These effects include a decrease in informal complaints to retailers or service providers, a decrease in formal complaints to management, an increase in customer repurchase intentions, positive word of mouth, and a decrease in sensitivity to price increases, all of which may be generalized into two key consequences: customer loyalty and complaint behavior. Loyalty is distinct from customer retention. Loyalty is a psychological predisposition toward repurchase; customer retention is the act of repurchase. 
The primary way to manage customer retention is through a process of constantly finding new and better ways to satisfy customer needs, thereby leveraging customer loyalty into repurchase behavior. This includes strategies such as investing in new technology, developing revolutionary products and services, and controlling costs to constantly provide customers with the highest value. The biggest differences between the performance model and the disconfirmation model concern the role of expectations.
A Perspective on Team Building
Dr. Jean Gordon, Barry University, Miami, Florida
The way we work is changing. Middle management has been reduced to its lowest level, and organizations are flattening their structures, generating new business process methodologies. Many of the changes being experienced are a result of restructuring, mergers and acquisitions, global competition, and changing work trends. Teams are becoming more of the norm in this new workplace, and teams are seen as one way of leveraging organizational strengths to offset new challenges. Research has shown that team building takes time and effort to produce systematic, lasting results. Furthermore, teams are beginning to change the way workers work, and organizations are beginning to realize that a “we” culture may better suit business needs than the traditional “I” culture. This research seeks to outline the characteristics of successful teams and to expand on theorists who believe that knowledge of how people work will enhance team-building efforts. The purpose of this paper is to generate new ideas on team building while expanding on existing research and processes. Let us begin with a definition of “team” in the context of the workplace. Teams can be defined as small groups of people committed to a common purpose, who possess complementary skills and who have agreed on specific performance goals for which the team holds itself mutually accountable (Katzenbach and Smith). Effective teams must have individuals with complementary skills in order to meet the ever-changing needs of both internal and external customers. Further, effective teams must have specific goals to strive for, which allow mutual accountability. Finally, teams should be composed of a small number (preferably an odd number, e.g., 5 or 7) of people to ensure consensus without discord. First, let’s look at what makes a team successful. The Pfeiffer Book of Successful Team-Building Tools (Biech, p. 
13-26) gives ten characteristics of successful teams:
- Clear Goals – allow everyone to understand the function and purpose of the team.
- Defined Roles – allow team members to understand why they are on the team and enable clear individual and team-based goal setting.
- Open and Clear Communication – considered the most important aspect of team building; effective communication hinges on effective listening.
- Effective Decision Making – for a decision to be effective, the team must be in agreement with it and must have reached agreement through a consensus-finding process.
- Balanced Participation – ensures that all members are fully engaged in the efforts of the team. Participation is also directly linked to leader behaviors: effective team leaders should not see their role as authoritarian and should strive to be seen as the team's mentor or coach.
- Valued Diversity – the team must recognize each member's expertise and value a variety of knowledge, skills, and abilities. In the world of teams, diversity is larger than just race or gender.
- Managed Conflict – all team members should feel free to state their point of view without fear of reprisal. For teams, managed conflict is akin to brainstorming in that conflict allows the team to openly discuss ideas and decide on common goals.
- Positive Atmosphere – a climate of trust must be developed. One way of developing trust is to allow team members to come together in a positive atmosphere; allowing team members to become comfortable with one another generates a positive atmosphere, leading to enhanced creativity and problem solving.
- Cooperative Relationships – team members should recognize that they need one another's knowledge and skill to complete the given task(s).
- Participative Leadership – includes having good leadership role models, as well as leaders who are willing to share responsibility and recognition with the team. 
Given these characteristics of successful teams, it really is not surprising that in the quest for teamsmanship organizations must be prepared for a long and often difficult journey. Part of the difficulty lies within the team itself: teamwork goes against the grain of our humanity and, for now anyway, few people independently choose to be team players. In 1985, Dr. Charles Margerison and Dr. Dick McCann founded the Institute of Team Management Studies (TMS), whose goal is to identify ways of enhancing effective teamwork. Drs. Margerison and McCann have performed extensive research in the area of successful team building, focusing on identifying and understanding key work elements that provide a reliable and valid basis for explaining why some individuals, teams, and organizations perform well, work effectively, and achieve their objectives, while others fail. TMS was founded in Australia and has offices worldwide. Since its inception, TMS has served the needs of diverse clients including Allstate Insurance, BP Exploration, Hewlett Packard, Nestle USA, Monsanto, and many others. TMS focuses on the people side of effective management. It has worked in the areas of business process re-engineering, career planning, outplacement, performance review, team building, and team start-up. Its website (http://www.tms.com.au) is very informative and gives visitors the opportunity to join TMS’ e-newsletter for updates on new developments. TMS’ specialty is helping its clients understand their employees through the use of personality trait indicators. These traits are measured using an array of assessment tools that TMS tailors to the specific needs of clients. This report will focus on the Team Management Index (Profile), one of six Profile Assessments (Team Management Index, Types of Work Index, Linking Skills Index, Team Performance Index, QO2, and Influencing Skills Index) that TMS has developed. 
This report will also graphically illustrate assessment results via the Team Management Wheel and the associated “Linker” Circle. The Team Management Index (Index) is a profile assessment tool that TMS developed following the Law of the Three P's: people practice what they prefer and therefore perform better in those areas that match their preferences. According to TMS, understanding work preferences is critical to the development of individual, team, and organizational performance. The Index is a sixty-question assessment that focuses on understanding how an individual approaches work assignments. Completion of the Index reveals an individual's major and minor work preferences. These preferences include individual strengths, decision-making skills, interpersonal skills, and team-building skills. Norm data is included for comparison. According to TMS, the Index has been used in diverse applications, including team and leadership development. The Margerison-McCann Team Management Wheel (Wheel) is one of the many graphical assessment tools that TMS has developed to help its clients interpret results of the Index assessment profile. The Wheel seeks to define member roles and is based on the following eight individual team member roles:
- Reporter-Advisers: those who prefer work involving gathering and sharing of information. Supporter, helper, collector of information; knowledgeable, flexible.
- Creator-Innovators: those who prefer work that generates and experiments with new ideas. Imaginative, creative, enjoy complexity, future-oriented.
- Explorer-Promoters: those who prefer work that involves investigation and presentation of new opportunities. Persuader, influential and outgoing, easily bored.
- Assessor-Developers: those who prefer work that involves planning to ensure that ideas and opportunities are feasible in practice. Analytical and objective, idea developer, experimenter.
- Thruster-Organizers:
Factors Affecting Customer Loyalty in the Competitive Turkish Metropolitan Retail Markets
Dr. Altan Coner, Yeditepe University, Istanbul, Turkey
Mustafa Ozgur Gungor, Yeditepe University, Istanbul, Turkey
Dynamic market behavior, adaptation to diverse social segments, and the flexibility needed for each individual consumer are the challenges marketing faces today. One of the main drivers of this one-to-one marketing era is the increasing capability that the technology renaissance of the 1990s brought forward; the other is the evolution of customer relationship management (CRM). The need to build effective relationships with customers has become more essential for businesses to remain competitive. Therefore, CRM is implemented as a combination of managerial and marketing issues with detailed collection and analysis of customer-related data. Moreover, the importance of understanding customers' behavioral moves in order to develop a better relationship and keep them satisfied brings forward customer loyalty management as a critical issue for any business acting in highly competitive markets. This paper presents the findings of research focused on the examination of the factors acting on customer loyalty. The research was carried out in the metropolitan Turkish retail markets, where intense rivalry exists. After it was introduced in the 1960s, the paradigm built on the concepts of the marketing mix and the four Ps of marketing – product, price, place and promotion – became the major arena of marketing science for several decades. This mainstream was followed by many researchers and developed in various dimensions after its first introduction. The detailed discussions about each P were covered in many titles (McCarthy, 1960; Kotler, 1991; Boone and Kurtz, 1995; Kotler and Armstrong, 1999). Kotler (1991) provided one of the most comprehensive in-depth treatments of these four Ps in the 1990s, explaining the fundamental four-P model using services and communication marketing terminology along with additional key aspects of the theory. 
However, these aspects had been extensively proposed in Borden's formulation (Grönroos, 1994) long before Kotler and were summarized in the final model. Kotler, using the generalized model, discussed contemporary marketing issues in an interdisciplinary manner. This model found the most popular implementation methodologies and was widely applied in the 1990s after Kotler's additions. On the other hand, these theories of marketing were largely outside the scope of the business world until the Internet era arrived. E-business became the new paradigm in the life span of industries, businesses, and individuals, who now interact, relate, mobilize, and customize. Although these terms were not new to marketing, the rapid change of the media, of production processes, and of the nature of distribution challenged the four Ps. The philosophy behind these changes, pointed out by different theories of marketing, can be encapsulated into the marketing mix and the four Ps. Dynamic interaction, integration of consumer behavior, and managerial decision making became more important research topics for the enhancement of this model. Some of the key factors leading this production-oriented four-P model toward greater customer orientation included adaptability, flexibility, and responsiveness. According to Grönroos (1994), “An interest in turning anonymous masses of potential and existing customers into interactive relationships with well-defined customers is becoming increasingly important.” As a compromise, some emerging models in marketing were developed, such as the network approach, customer relationship marketing, one-to-one marketing, and the long-term profitability of customer retention. One of the first applications of this kind was “interactive marketing”, which is used to cover the marketing impact on the customer during the consumption and usage stages, where the consumer interacts with employees and entities of the service provider. 
Another application was the “customer relationship life-cycle model”, which is used to convey the long-term nature of the establishment and evolution of the relationship between a firm and its customers. Heskett (1987) introduced a third, called “Market Economies”, in which he discussed the requirement to understand customers instead of concentrating on developing scale economies. This approach was later complemented by Reichheld (1993), who concluded that long-term relationships, in which both parties learn over time how best to interact with each other, might lead to a decrease in relationship costs for the customer as well as for the producer or service provider. Finally, “customer loyalty” was brought forward when an intimate relationship between the producer and the customer, through the services bundled with the product, gained significance. Not only actors in highly competitive markets but also actors in monopoly markets were providing new interactive services for the customer to attain higher rates of repurchase. Therefore, customer loyalty is a profound solution within the mutually satisfactory relationship: it makes it possible for customers to avoid the significant transaction costs involved in shifting supplier or service provider, and for suppliers to avoid suffering uncertainty of sales and unnecessary quality costs. Moreover, the presence of substantial information asymmetries about markets and products, increasing product variety and complexity, continuous change in the long-term nature of many products, the lack of time, and the challenge of quality uncertainty bear a relatively high degree of risk for the customer. Customers wish to be loyal to a trusted retailer to minimize this risk. As defined by Grönroos (1994), “Customer loyalty is the marketing approach of relationship marketing, i.e., 
establishing, maintaining, and enhancing relationships with customers and other partners, at a profit, so that the objectives of the parties involved are met. This is achieved by a mutual exchange and fulfillment of promises.” Customers have different levels of loyalty, and different attributes attract them under various constraints. Dick and Basu (1994) classified loyalty into four major categories (Table 1.1). Those categories were discussed in detail and formalized in Gabbott and Hogg’s (1998) review of loyalty, which suggested that bonding arrangements between the parties could act as a form of “glue”; they proposed six forms of bonding: goal compatibility, trust, satisfaction, investment, social, and structural. Developing this theme of bonding and relationships, Gifford (1997) conceptualized seven attributes of brand relationship: quality as love and passion, self-concept connection, interdependence, commitment, intimacy, partner quality, and nostalgic attachment. According to the levels of communication effectiveness implied by bonding forms and brand association, a person at each level can be carried to a more loyal level.
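Dick and Basu's four categories can be read as a two-by-two of relative attitude against repeat patronage. A minimal sketch of that classification follows; the high/low encoding is an assumption about Table 1.1, which is not reproduced in the text:

```python
def loyalty_category(relative_attitude, repeat_patronage):
    """Dick and Basu (1994): cross relative attitude ('high'/'low') with
    repeat patronage ('high'/'low') to obtain one of four loyalty categories."""
    if relative_attitude == "high":
        # Favorable attitude: behavior decides between true and latent loyalty.
        return "loyalty" if repeat_patronage == "high" else "latent loyalty"
    # Weak attitude: habitual repurchase without attachment is only spurious.
    return "spurious loyalty" if repeat_patronage == "high" else "no loyalty"
```

On this reading, a retailer's bonding efforts aim to move customers up the attitude dimension, converting spurious loyalty into true loyalty.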
Dr. Charles A. Rarick, Barry University, Miami, FL
The Correlative Relationship between Value, Price & Cost
Dr. Richard Murphy, President, Central Marketing Systems Inc., Ft. Lauderdale, FL
A consumer goes to an electronics store to purchase a new television set. The consumer spends almost an hour listening to the salesperson, looking at and comparing different models. The consumer selects a model priced at $585. Did the television cost the consumer $585? Many people would answer "yes" because that was its price. But there is a difference between the actual dollars charged as the price and the cost to the consumer. That customer had time and energy invested in the purchase in addition to the dollars paid. The cost to the consumer, then, must include all the resources that were used to make the purchase. Today's consumer is bombarded with advertisements in all media, direct mail offers, and telemarketing offers for long-distance telephone service. One of AT&T's ads boasts a rate of 7 cents per minute for long distance. The price is 7 cents, but that is not the cost. Whether the ad is a commercial on television or an ad in print, there is a small caveat: the consumer will be billed a monthly charge of $5.95 if they sign up for this long-distance rate (Teinowitz, 1999). The actual cost to the consumer is a good deal more - $5.95 per month plus the 7 cents per minute. This is the difference between the price and the cost. We take this concept one step further in this paper: what was the value? Did the value equal the cost? There are numerous factors involved when we begin to discuss the issues of value, cost, and price. The value of anything is perceived by the customer, not the manufacturer or the vendor. Value is an abstract construct that the consumer determines based on a number of factors. The degree of risk in the purchase is also a factor in perceived value. Consumers must perceive that they receive a higher value from one vendor or from one product than another in order to purchase it. 
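The long-distance example can be made concrete with a little arithmetic; the 100-minute monthly usage figure below is an assumed illustration, not from the text:

```python
def effective_cost_per_minute(advertised_rate, monthly_fee, minutes_used):
    """True per-minute cost once the fixed monthly charge is spread over usage."""
    return (monthly_fee + advertised_rate * minutes_used) / minutes_used

# AT&T's advertised 7 cents/minute with the $5.95 monthly charge:
# at 100 minutes a month, the consumer really pays about 13 cents a minute.
cost = effective_cost_per_minute(0.07, 5.95, 100)
```

Note that the gap between price and cost shrinks with heavier usage, since the fixed fee is spread over more minutes; light users bear the largest difference.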
The cost includes the actual price of the product or service, but it also includes 'hidden' costs, such as the time it takes to travel to the store or to complete the transaction. The following pages more fully discuss the issues of cost, price, and value. The marketing mix includes those variables that the marketing department can control in advertising a product or service. It is intended to convey to the consumer the value to them of purchasing this product or service. When this concept was first designed, it was called the 4Ps – product, place/distribution, pricing and promotion (Dennis, 1999). They represent the marketers’ bag of tools, an armory that can be manipulated to gain a competitive advantage over competitors (Carson, 1998; Dennis, 1999). As time has passed, many suggest that the 4Ps should be changed to the 4Cs (Dennis, 1999). The reason is that the 4Ps were devised in the industrial age, when the focus was on the product, but in today's world the focus is on the consumer (Carson, 1998; Dennis, 1999). In other words, marketers need to be customer-oriented rather than product- or company-oriented. The 4Cs give us a beginning understanding of how a company conveys value to the consumer. The 4Cs are: Customer value: what is the value of the product; what benefits would the buyer gain? Cost to the customer: as we began to explain above, what is the actual cost? This equals the price plus the time and other costs the customer incurred to buy the product, e.g., travel to the store, time spent looking at the product or standing in line (Dennis, 1999). Price is nothing more than an optimal economic number, while cost is a social-scientific construct that has to do with the customer's perception of how much it really cost them to buy the item (Carson, 1998). Convenience for the buyer: this has to do with channels of distribution – how convenient is it for the consumer to purchase this product (Dennis, 1999). 
Communication: Marketing can no longer be confined to a one-way communication mode whereby the company tells the consumer about the product, there must be two-way communication (Dennis, 1999). The only way to know how customers perceive their costs or the value of the product is by asking them, which involves a dialogue in some way (Carson, 1998), even if it is a survey. This has to do with relationship marketing (Carson, 1998). The marketing mix provides a starting place for the company when marketing any product, be it new or a continuing item. These are the factors that must be considered if the company is going to have a successful marketing campaign. These are not the only factors to consider but when each is taken to its extreme, each will include nearly all of the different aspects of marketing campaigns. Setting prices for any product or service is obviously a critical decision for the company. Schofield calls it "one of your most important and challenging responsibilities" (1999). The company must make a profit - that is a given, no profit means no more company. But, the company must also determine a price for the product that helps the company prosper but that is also attractive to the consumer. This requires a calculation that includes the costs the company incurred to develop, produce and then sell the product. The cost is the "sum total of the fixed and variable expenses to manufacture or offer your product or service" (Schofield, 1999). Fixed costs include things like rent, insurance, office equipment, utilities, salaries for executives, property taxes, depreciation (Schofield, 1999). Variable costs include things like raw materials used to make the product, hourly wages, benefits, warehouse and shipping costs, commissions to salespeople, advertising and promotion (Schofield, 1999). Variable costs change depending on the amount of goods that are produced (Schofield, 1999). 
Thus, fixed costs are those things that must be paid every month, or on whatever other regular schedule they are due and variable costs are those that vary, or change, depending on what product is being produced (Sifleet, 2002). Variable and fixed costs need to be totaled and included in the cost of developing the product. The price is set somewhere between the actual cost of producing the product and the ceiling, which is the highest price that could be set for the item (Schofield, 1999). The break-even point must be established, in other words, what must be charged in order for the company to just break even between the revenue obtained for the product and the expenses of producing it (Schofield, 1999). Sifleet offers an example of an analysis to determine the break-even point in a training consulting firm (2002). Their intended outcome from the analysis is to determine the appropriate rate to charge per hour for consultations (Sifleet, 2002). They begin by totaling all the fixed costs and arrive at a sum of $30,000 per year (Sifleet, 2002). The variable costs include instructor's pay at $15 per hour (Sifleet, 2002). They then graph the costs for different amounts of billable hours per year (Sifleet, 2002). They also graph projections of revenue that is based at three different hourly rates for fees: $30 per hour, $35, and $50 per hour (Sifleet, 2002). In order to be profitable the company must generate more revenue than their costs and from the graphs, they find that at $30 per hour, the business will have to generate at least $60,000 revenue in a year to break even, i.e., to simply cover their costs (Sifleet, 2002). They further calculate that to generate the $60,000, they will have to have 2000 billable hours (Sifleet, 2002). Taking this further, they determine how many hours will have to be billed each week and find that with a 50-week work year, 40 hours each week must be billed just to break even (Sifleet, 2002). 
Thus, the $30 per hour fee will not work; it is not realistic for the company because the only way it can earn a profit is by scheduling far more than 40 hours per week (Sifleet, 2002). Further calculation tells the company that at $35 per hour it needs to schedule 30 hours per week, and at $50 per hour, 17 hours per week (Sifleet, 2002). Remember, these are the break-even points, just covering costs. These calculations demonstrate that the floor price is $35 per hour, but to make a profit the company will have to set a fee of $50 per hour. So, the floor price is the break-even point and the ceiling price is what the market will bear, the highest price the consumer would pay (Sifleet, 2002). The appropriate price is somewhere in between these two extremes. The price must be high enough for the company to grow and low enough to be attractive to consumers. When pricing anything, the other factor that must be considered is the value of the product to the consumer. As we already stated, “consumers will pay a higher price for things they perceive to hold significant value for them” (Schofield, 1999). Sifleet brings the factor of perceived value into the equations as well.
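Sifleet's worked example can be sketched directly from the figures given in the text ($30,000 fixed costs per year, $15 per billable hour in instructor pay, a 50-week work year); this is a minimal illustration of the break-even logic, not Sifleet's actual spreadsheet:

```python
FIXED_COSTS = 30_000    # annual fixed costs from the example
VARIABLE_COST = 15      # instructor pay per billable hour
WEEKS_PER_YEAR = 50     # assumed work year from the example

def break_even_hours(hourly_rate):
    """Billable hours per year at which revenue just covers total cost:
    rate * h = FIXED_COSTS + VARIABLE_COST * h  =>  h = fixed / (rate - variable)."""
    return FIXED_COSTS / (hourly_rate - VARIABLE_COST)

for rate in (30, 35, 50):
    hours = break_even_hours(rate)
    print(f"${rate}/hr -> {hours:,.0f} hours/year, {hours / WEEKS_PER_YEAR:.0f} hours/week")
# At $30/hr the firm needs 2,000 hours/year (40 hours/week): break-even only
# at full capacity, which is why that rate is rejected in the text.
```

The computed figures (2,000 hours at $30; 1,500 hours, or 30 per week, at $35; roughly 17 per week at $50) match the numbers Sifleet reports.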
The Accounting Crisis as a Result of the Enron Case, and Its Implications on the U.S. Accounting Profession
Dr. Dhia D. AlHashim, California State University, Northridge, California
In a free-enterprise economy, the integrity of the economic system is crucial to investors’ confidence. Lately, with the discovery of so many business scandals, investors’ confidence in the corporate system and the accounting profession has eroded. The purpose of this research is to investigate the reasons for the recent business scandals, particularly that of Enron Corporation, the impact on the U.S. accounting profession, and the lessons learned for developing nations. On August 14, 2001, Mr. Jeffrey Skilling, CEO, resigned from Enron; on November 8, 2001, Enron restated its earnings for the years 1997 through 2000. On November 30, 2001, Enron filed for bankruptcy protection. Enron wiped out $70 billion of shareholder value, defaulted on tens of billions of dollars of debt, and its employees lost their life savings (the pension plan consisted of Enron stock). The question is: why did Enron collapse? There is only one answer, in my opinion, and that is: derivatives! A major portion of these derivatives relates to the now infamous “Special Purpose Entities (SPEs).” Enron Corporation was one of the pioneers of energy deregulation and became a major force in the trading of energy contracts in the U.S. and overseas markets. Last year, the company was considered the seventh largest company in the U.S., with revenues exceeding $150 billion and assets of more than $60 billion. It handled about one quarter of the U.S.'s traded electricity and natural gas transactions. However, it appears that Enron’s success was not entirely due to the brilliant business strategies developed by its former chairman Ken Lay. As the unraveling scandal shows, a significant portion is attributable to innovative financing and accounting strategies. There is no question that the continuation of deregulation of the economy and the privatization of services depends on the integrity of financial reporting systems. Integrity can be achieved by having a fair and transparent accounting system. 
It is alleged that accountants are compromising their integrity, by manufacturing companies’ earnings, for the sake of obtaining a piece of the action. Observing recent unusual business events leads us to the conclusion that it is not only Enron that has been manufacturing earnings and hiding debts in subsidiaries and partnerships with the help of its accountants; many other U.S. companies are hiding trillions of dollars of debt in off-balance-sheet subsidiaries and partnerships, such as UAL ($12.7 billion), AMR, parent of American Airlines ($7.9 billion), J.P. Morgan Chase ($2.6 billion), Dell Computer ($1.75 billion), and Electronic Systems ($0.5 billion). This research investigates the impact of these recent business scandals, particularly that of Enron Corporation, on the U.S. accounting profession, with possible lessons for developing countries. Enron’s goal of becoming “the world’s greatest company” required a continuous infusion of cash. This in turn demanded favorable debt/equity ratios and high stock prices. To accomplish these goals, under the leadership of its former chief financial officer (CFO) Andrew Fastow, Enron developed an increasingly complex financial structure and utilized a bewildering network of partnerships and SPEs. To generate the cash, Enron formed a new SPE, Chewco, consisting of Enron executives and some outside investors (see Exhibit 1). To take advantage of loopholes in U.S. generally accepted accounting standards, companies establish SPEs by having outside investors contribute 3% of the capital of these SPEs, so that the SPEs can be considered independent and kept off the balance sheets of the corporations that contribute the other 97% of the invested capital. By creating these SPEs, Enron was no longer required, per U.S. generally accepted accounting standards, to include in its financial statements the assets and liabilities of the SPEs of which it owned 97%! 
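The 3% loophole described above can be sketched as a simple consolidation check. This is a deliberate simplification of the actual accounting rule, meant only to illustrate the mechanics the text describes:

```python
def spe_stays_off_balance_sheet(outside_equity, total_capital):
    """Pre-Enron practice as described in the text: an SPE with at least 3%
    independent outside equity could be treated as independent and left off
    the sponsor's balance sheet, even if the sponsor supplied the other 97%."""
    return outside_equity / total_capital >= 0.03

# Chewco-style structure: sponsor contributes 97, outsiders contribute 3.
# The 3% threshold is met, so the SPE's debts vanish from the sponsor's books.
```

The asymmetry is the point: a 3% sliver of outside capital was enough to keep 97%-owned liabilities invisible to investors reading the sponsor's financial statements.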
Enron thus removed a substantial amount of liabilities from its balance sheet, eliminated hundreds of millions of dollars of expenses from its income statement, and included false gains on its speculative investments in various technology-oriented companies. The net impact of these practices was the creation of a financial-powerhouse façade that misled investors. Enron may have been just an energy company at its inception in 1985, but by the end it had become a full-blown OTC derivatives trading company. Its OTC derivatives-related assets and liabilities increased more than five-fold during the year 2000 alone. Since OTC derivatives trading is beyond the purview of organized, regulated exchanges, Enron fell into a regulatory black hole. Enron collapsed because of the derivatives deals it entered into with its more than 3,000 off-balance-sheet subsidiaries and partnerships, such as JEDI, Raptor, and LJM. Derivatives are complex financial instruments whose value is based on one or more underlying variables, such as the price of a stock, an interest rate, a foreign exchange rate, an index of prices or rates, a commodity price (e.g., the cost of natural gas), or other variables. The size of derivatives markets typically is measured in terms of “notional amounts” (a number of currency units, shares, bushels, pounds, or other units specified in the contract). Recent estimates of the size of the exchange-traded derivatives market, which includes all contracts traded on the major options and futures exchanges, are in the range of $13 to $14 trillion in notional amount. By contrast, the estimated notional amount of outstanding OTC derivatives as of year-end 2000 was $95.2 trillion, which represents about 90% of the aggregate derivatives market, with trillions of dollars at risk every day. Derivatives can be traded in two ways: on regulated exchanges or in unregulated over-the-counter (OTC) markets. 
Enron capitalized on the latter in dealing with its derivatives, exploiting both the inaction of the U.S. Commodity Futures Trading Commission and the passage of the Commodity Futures Modernization Act by the U.S. Congress in December 2000, which made the deregulated status of derivatives clear.
The Relationship Between Dividends and Accounting Earnings
Dr. Michael Constas, California State University, Long Beach, CA
Dr. S. Mahapatra, California State University, Long Beach, CA
This research examines the relationship between dividends and earnings. The model used here is a variation of the model tested in Fama and Babiak (1968), which has not been altered in the subsequent empirical literature. The importance of this model is underscored by Copeland and Weston (1988), Kallapur (1993), and Healy and Modigliani (1990), the last of which used it to examine the influence of inflation on dividend policy. This research, however, differs from the Fama and Babiak model in important respects. The Fama and Babiak model is linear, while the model tested in this research is a linear logarithmic transformation of a nonlinear relationship. The Lintner (1956) and Fama and Babiak (1968) model has an additive error term with a normal distribution, whereas the model tested herein assumes that the underlying relationship has a multiplicative error term with a lognormal distribution. The empirical results reported in this paper reflect an improvement over the results obtained by using the original Fama and Babiak (1968) model. The Fama and Babiak (1968) study involved running separate regressions for each firm. In the revised model (used here), the cross-sectional parameters are significant, and, in both cross-sectional and separate firm regressions, the revised model produces higher adjusted R²s than the Fama and Babiak model does. The Fama and Babiak (1968) model is based upon the premise that a firm’s current year’s dividends reflect its current year’s earnings. The prior year’s dividends are subtracted from both the current year’s dividends and earnings in order to produce the change in dividends as an independent variable. The empirical results reported here, however, suggest that the presence of the prior year’s dividends as an independent variable is an important part of the relationship between dividend changes and earnings changes. Current dividends appear to be adjusted when a firm experiences earnings that are inconsistent with prior dividend declarations. 
This adjustment can be explained in two ways. First, it may be that a firm readjusts its dividends when it experiences inconsistent earnings because its ability to pay dividends has changed. Second, the adjustment may be due to the fact that dividends serve as management’s signal of how the firm is expected to perform in the future, and this signal changes with new information. If the second explanation is correct, dividends offer important information regarding management’s expectations of a firm’s future earnings. The model developed in this section has strong similarities to, but important differences from, the model tested in Fama and Babiak (1968), which was based upon a model developed in Lintner (1956). As noted in Fama and Babiak (1968), the dividends declared during any year by a firm reflect the earnings of that firm for the current year. The Fama and Babiak (1968) model assumed a linear relationship between (dit+1/dit) and (eit+1/dit) with an additive error term, as in equation (3.5). The relationship between (dit+1/dit) and (eit+1/dit) could also be structured as a nonlinear relationship with a multiplicative error term, and the error term could have a lognormal distribution. If this were the case, then a logarithmic transformation of that relationship would produce equation (3.6). The issue of whether to model a relationship using an additive error term with a normal distribution or a multiplicative error term with a lognormal distribution is discussed in Judge et al. (1980, pp. 844-45). Judge suggests that a test outlined in Leech (1975) may be used to determine whether the version of the basic model using an additive error term or the version using a multiplicative error term is more appropriate for a given data set. 
Under the Leech test, the version of the basic model producing the higher value for the log likelihood function is the more appropriate version of that model. Equations (3.5) and (3.6) were tested using the Leech test, and equation (3.6) produced the larger log likelihood value. A separate OLS regression is run for each firm across all years. In order to be included in this regression, a firm needed to have observations for at least 15 years. To take into account annual differences in dividend payment policies, the model includes a variable consisting of the median dividend per share for the sample for the current year (determined prior to the screen for “higher” P/E ratios) divided by the median dividend per share for the sample for the prior year (determined prior to the screen for “higher” P/E ratios).
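The Leech-style comparison described above can be sketched in a few lines. The sketch below is illustrative only, using simulated data rather than the paper’s sample, and all names are hypothetical: it fits the additive version by OLS, fits the multiplicative version in logs, and compares the log-likelihoods of the original data (the Jacobian term for the log transformation makes the two values comparable).

```python
import numpy as np

def ols_loglik(y, X):
    """Fit OLS and return the Gaussian log-likelihood at the MLE."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    n = len(y)
    sigma2 = resid @ resid / n  # MLE of the error variance
    return -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)

def leech_compare(div_ratio, earn_ratio):
    """Compare the additive (eq. 3.5) and multiplicative (eq. 3.6) versions.

    For the log model, the likelihood of the *original* data includes the
    Jacobian term -sum(log y), so the two values are directly comparable.
    """
    ll_additive = ols_loglik(div_ratio, earn_ratio)
    ll_multiplicative = (ols_loglik(np.log(div_ratio), np.log(earn_ratio))
                         - np.log(div_ratio).sum())
    return ll_additive, ll_multiplicative

# Simulated data: dividend ratios tracking earnings ratios with
# multiplicative lognormal noise (hypothetical, for illustration only)
rng = np.random.default_rng(0)
e = rng.lognormal(0.5, 0.3, 200)                 # e_{it+1}/d_it
d = 0.8 * e**0.6 * rng.lognormal(0, 0.4, 200)    # d_{it+1}/d_it
ll_add, ll_mult = leech_compare(d, e)
print(ll_mult > ll_add)  # expect the multiplicative version to fit better here
```

Because the simulated errors are genuinely multiplicative, the log specification produces the larger log-likelihood, mirroring the result reported for equation (3.6).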
Trade and the Quality of Governance
Dr. Fahim Al-Marhubi, Sultan Qaboos University, Sultanate of Oman
Different strands of the trade and governance literature imply a link between the openness of an economy to international trade and the quality of its governance. This paper tests this proposition using a newly created dataset on governance that is multidimensional and broad in cross-country coverage. The results provide evidence that the quality of governance is significantly related to openness in international trade. This association is robust to alternative specifications, samples, and governance indicators. The last decade has witnessed an explosion of research on economic growth. Two issues that lie at the heart of this research are the role of international trade and that of governance in promoting growth and better development outcomes. However, due to conceptual and practical difficulties, these two lines of research have run in parallel without explicit recognition of each other. Conceptually, the relationship between openness and governance has been left rather imprecise, with a notable absence of a convenient theoretical framework linking the former to the latter. Practically, the difficulty lies in defining governance. While it may appear to be a semantic issue, how governance is defined actually ends up determining what gets modeled and measured. For example, studies that examine the determinants of governance typically tend to focus on corruption (Ades and Di Tella, 1999; Treisman, 2000). However, governance is a much broader concept than corruption and little has been done to address the other dimensions of governance discussed in the next section. The purpose of this paper is to investigate more systematically the link between the openness of an economy and the quality of its governance. A practical difficulty that arises, however, in trying to estimate openness’ exogenous impact on governance in a cross section of countries is that the amount that countries trade is not determined exogenously. 
Openness may be endogenous since it is quite likely that countries that can manage risks and exploit opportunities from trade because of their high quality governance choose or can afford to be more open. Hence, better governance can lead to greater openness rather than the other way round. As a result, correlations between openness and governance may not reflect an effect of trade on governance. In estimating the impact of openness on governance, what is needed is a source of exogenous variation in openness. Using measures of countries’ trade policies in place of (or as an instrument for) trade will not solve this problem since countries that have better governance may also adopt free-market trade policies. To cope with this problem, this paper estimates trade’s impact on governance by instrumental-variable estimation using the component of trade that is explained by geographic factors as an instrument for openness. This instrument, constructed by Frankel and Romer (1999), exploits countries’ geographic characteristics (countries’ sizes, populations, distances from one another, common border or not, and landlocked or not) as a source of exogenous variation in trade. The suitability of this instrument rests on the premise that geography is an important determinant of countries’ (bilateral as well as total) trade, and that countries’ geographic characteristics are unlikely to be correlated with their governance, or affected by policies and other factors that influence governance. Using a newly created dataset on governance that is multidimensional and broad in cross-country coverage, the results indicate a significant positive relationship between the openness of an economy and the quality of its governance. This association is robust to changes in specification, datasets, and indicators of governance. There have been a number of different attempts at defining governance (World Bank, 2000). 
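The instrumental-variable strategy can be illustrated with a minimal two-stage least squares sketch. The data, variable names, and effect size below are all hypothetical; the sketch only shows why instrumenting openness with its geography-predicted component removes the bias that reverse causation or an omitted factor would otherwise introduce.

```python
import numpy as np

def two_sls(y, x_endog, z_instr):
    """Two-stage least squares with one endogenous regressor.

    y       : outcome (governance quality)
    x_endog : observed openness (trade share), possibly endogenous
    z_instr : geography-predicted trade (a Frankel-Romer style instrument)
    """
    n = len(y)
    Z = np.column_stack([np.ones(n), z_instr])
    # First stage: project openness onto the exogenous instrument
    gamma, *_ = np.linalg.lstsq(Z, x_endog, rcond=None)
    x_hat = Z @ gamma
    # Second stage: regress governance on the fitted (exogenous) openness
    X = np.column_stack([np.ones(n), x_hat])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]  # IV estimate of the effect of openness

# Simulated illustration: the true effect of openness is 0.5, but an
# unobserved factor u drives both openness and governance, biasing OLS upward.
rng = np.random.default_rng(1)
geo = rng.normal(size=500)                 # exogenous geographic component
u = rng.normal(size=500)                   # unobserved confounder
openness = geo + u + rng.normal(size=500)
governance = 0.5 * openness + u + rng.normal(size=500)
iv = two_sls(governance, openness, geo)
print(round(iv, 2))  # close to the true 0.5, unlike naive OLS
```

Because the geographic component is uncorrelated with the confounder, the first-stage fitted values carry only the exogenous variation in openness, so the second-stage slope recovers the causal effect.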
Despite the absence of a precise definition, a consensus has emerged that governance broadly refers to the manner in which authority is exercised. Defined in this way, governance transcends government to include relationships between the state, civil society organizations, and the private sector. It includes the norms defining political action, the institutional framework in which the policymaking process takes place, and the mechanisms and processes by which public policies are designed, implemented, and sustained. Frequently identified governance issues include the limits of authority and leadership accountability, transparency of decision-making procedures, and mechanisms for interest representation and conflict resolution. If governance is difficult to define, it is even more difficult to measure. Empirical studies in both time series and cross-sectional contexts have deployed a variety of proxy measures, ranging from indicators of civil liberties, frequencies of political violence, investor risk ratings, and surveys of investors to aggregated indexes. This paper relies on the recent definition proposed, and the proxy measures constructed, by Kaufmann et al. (1999b). Kaufmann et al. (1999a: 1) define governance as “the traditions and institutions by which authority is exercised. This includes (1) the process by which governments are selected, monitored and replaced, (2) the capacity of the government to effectively formulate and implement sound policies, and (3) the respect of citizens and the state for the institutions that govern economic and social interactions among them.” Operating on the principle that more information is generally preferable to less, Kaufmann et al. (1999b) aggregate governance indicators from several sources into an average or composite indicator – a poll of polls. 
The raw data used to construct the composite governance indicators are based on subjective perceptions regarding the quality of governance, often drawn from cross-country surveys conducted by risk agencies and surveys of residents carried out by international organizations and other non-governmental organizations. Using an unobserved components methodology, Kaufmann et al. (1999b) combine more than 300 related governance measures into six aggregate (composite) indicators corresponding to six basic governance concepts, namely Voice and Accountability, Political Instability and Violence, Government Effectiveness, Regulatory Burden, Rule of Law, and Graft.
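As a simplified illustration of such a “poll of polls” (this is not the unobserved-components methodology Kaufmann et al. actually use, and the ratings below are invented), each source’s scores can be standardized to a common scale and averaged, ignoring countries a source did not rate:

```python
import numpy as np

def composite_indicator(sources):
    """Average standardized source ratings into one composite score per country.

    Each column is one source's rating of the countries, on its own scale.
    Columns are standardized to mean 0 / sd 1, then averaged across sources,
    skipping missing ratings (NaN).
    """
    X = np.asarray(sources, dtype=float)
    z = (X - np.nanmean(X, axis=0)) / np.nanstd(X, axis=0)
    return np.nanmean(z, axis=1)

# Three hypothetical sources rating five countries on different scales
ratings = np.array([
    [8.0, 70, 3.5],
    [6.5, 55, 2.8],
    [9.0, 80, np.nan],   # third source did not rate this country
    [4.0, 40, 1.5],
    [7.0, 60, 3.0],
])
scores = composite_indicator(ratings)
print(scores.argmax())  # country at index 2 scores highest
```

Standardizing first makes sources on different scales commensurable; averaging several noisy sources then reduces the measurement error of any single one, which is the intuition behind the composite indicators.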
Caribbean Economic Integration: The Role of Extra-Regional Trade
Dr. Ransford W. Palmer, Howard University, Washington, DC
This paper examines the feedback effect of extra-regional trade on intra-regional imports of the Caribbean Community (CARICOM). Because of the non-convertibility of CARICOM currencies, intra-regional trade must be settled in hard currency, typically the U.S. dollar. It is argued that the growth of extra-regional trade generates foreign exchange, which stimulates the growth of gross domestic product and intra-regional imports. Over the past thirty years, there has been an explosion of common market and free trade arrangements around the world, all of them designed to foster trade and promote economic growth. NAFTA and the European Economic Community are the two dominant ones, but in Africa and Latin America there are numerous others. Theoretically, the benefits from these arrangements seem particularly attractive for groupings of developing countries, but in practice numerous obstacles tend to hinder their full realization. This is particularly the case for CARICOM, a grouping of small Caribbean economies where the benefits tend to be constrained by their small size and openness, among other things. This paper examines the impact of extra-regional trade on the economic integration effort. After the failed attempt at political union in the English Caribbean in 1961, the search for economic cooperation led to the creation of the Caribbean Free Trade Association (CARIFTA) in 1969. 
In 1973 the Treaty of Chaguaramas replaced CARIFTA with the Caribbean Community and Common Market (CARICOM) and set the following objectives (Article 3 of the Annex to the Treaty): the strengthening, coordination and regulation of economic and trade relations among Member States in order to promote their accelerated harmonious and balanced development; the sustained expansion and continuing integration of economic activities, the benefits of which shall be equitably shared taking into account the need to provide special opportunities for the Less Developed Countries; the achievement of a greater measure of economic independence and effectiveness of its member states, groups of states and entities of whatever description. In the three decades since 1973, efforts to achieve these objectives have been buffeted by major external shocks. The oil shocks of the 1970s favored the only oil producer in CARICOM, Trinidad and Tobago, and punished all the oil importers. The recession that followed in the industrial countries of North America and Europe curtailed Caribbean exports. And the rise of socialist governments in the Caribbean during the 1970s choked off foreign private investment and crippled economic growth, particularly in Jamaica and Guyana. Unilateral trade concessions provided in the 1980s by the United States (the CBI), Canada (CARIBCAN), and Europe (the Lomé Convention) helped to offset some of the negative impact of these shocks but they also reinforced the extra-regional export orientation of the Community. The 1990s saw a gradual weakening of some of these preferential trading arrangements under the rules of the World Trade Organization. Because of the importance of external markets, these external influences have caused individual member countries to focus more on expanding these markets than on expanding the intra-regional market. As a consequence the urgency of integration has diminished. 
The region’s ability to benefit fully from economic integration is restrained by four principal factors: the limited mobility of labor and capital, the absence of a common regional currency, the slowness of establishing a common external tariff, and the similarity of products produced for export. The mobility of capital has been limited by cross-country diversity in legislative and development strategies (IMF Staff Report, 1998), and a regional stock exchange that could enhance capital mobility is yet to emerge. But the biggest restriction on capital mobility lies in the non-convertibility of the national currencies. As a result, the transaction costs of doing business are high and capital does not always move into its most productive uses. There appears to be no great urgency on the part of political leaders about creating a common currency. At their 1984 meeting in the Bahamas, the heads of government rejected the concept of a single currency, preferring instead to make the US dollar the common unit of exchange by pegging their currencies to it. This reluctance to create a single currency is attributed to the fact that such a currency would require a common monetary policy and therefore a regional central bank – a step that would undermine the sovereignty of national monetary policy. It is the non-convertibility of Caribbean currencies that makes the US dollar the currency of settlement in intra-regional trade (Williams, 1985). This means that the growth of intra-regional trade is limited by the availability of US dollars. Restrictions on labor mobility among CARICOM countries are motivated by the fear that some countries may export their unemployment to others. This was a major contributing factor to Jamaica’s break-away from the Federation of the West Indies in 1961: it saw itself as being inundated by an inflow of labor from the high-unemployment economies of the Eastern Caribbean. Some marginal steps have been taken to improve labor mobility. 
Nine member states have so far agreed to eliminate the need for work permits for CARICOM nationals who are university graduates, artistes, sports persons, musicians, and media workers. Eight member states also accept forms of identification other than passports from CARICOM nationals to facilitate inter-island travel.
What’s in an Idea? The Impact of Regulation and
This paper seeks to explore the relationship between government and business through an examination of regulation pertaining to the US agri-food sector. It will be argued that regulation can act as a power resource, determining who appropriates value in the supply chain. However, political intervention in the market creates differentially advantageous positions for some to the detriment of others and, as such, the political allocation of rents is a dynamic process in which firms compete to control this allocation. Thus, a further argument of this paper is that other power resources are available to firms, which can be used as countervailing sources of power to undermine and overturn regulation. In particular, the paper will focus on the role of ideas as ‘weapons’, which can be used by firms as resources to overturn unfavourable regulation. This paper will argue that the policy changes brought about under the 1996 Farm Bill (which replaced the New Deal-era target price/deficiency payment structure for feedgrains, wheat, cotton and rice with ‘production flexibility contract’ payments, thus decoupling the payments from either the commodity price or the amount of crop produced) could only be brought about by a corresponding change in the ideas which underpinned agricultural policy. It will be argued that these policy changes, which favoured agribusiness interests at the expense of production agriculture, were the result of a long-term campaign waged by agribusiness to change the terms of debate within which US agricultural policy was framed. Although ‘decoupling’ had been on the agricultural agenda since as early as the 1950s, the paper will argue that more wholesale changes did not occur earlier because: (1) production agriculture acted as a countervailing interest to agribusiness; and (2) the farm fundamentalist ideology had become “locked-in” to the AAA, and had become ‘cemented’ institutionally. 
However, by the 1980s, the agri-food supply chain had become increasingly integrated, with agribusiness assuming far more influence over policy direction than production agriculture and, as such, could more rigorously work to discredit the farm fundamentalist ideology. They launched an ideological campaign, utilising the rhetoric of globalisation, competition and markets to redefine the problems facing agriculture. In doing so, they successfully changed the ideas which framed agricultural policy, which enabled more wholesale policy changes to be implemented. It is my contention that ideas are represented in the policymaking process in the form of ‘policy paradigms’. According to Hall (1993), policy paradigms delineate the boundaries of the policymaking process by prescribing policy goals, instruments and settings. He states (1993: 279) that: policy makers customarily work within a framework of ideas and standards that specifies not only the goals of policy and the kind of instruments that can be used to attain them, but also the very nature of the problems they are meant to be addressing…[T]his framework is embedded in the very terminology through which policymakers communicate about their work, and it is influential precisely because so much of it is taken for granted and unamenable to scrutiny as a whole. Policy paradigms, therefore, perform a “boundary-setting function” and do so in terms of political discourse. However, political discourse is not a given. As Hall (1993: 289) himself states: “the terms of political discourse privilege some lines of policy over others, and the struggle both for advantage within the prevailing terms of discourse and for leverage with which to alter the terms of political discourse is a perennial feature of politics”. Within this framework, it is how issues are defined, which is crucial for understanding how policy evolves. Hall et al. 
(1978: 59) point out how important problem definition is in affecting whether an issue will even reach the agenda or not: the primary definition sets the limit for all subsequent discussion by framing what the problem is. This initial framework then provides the criteria by which all subsequent contributions are labelled as ‘relevant’ to the debate, or ‘irrelevant’ – beside the point. Furthermore, drawing on insights from the constructionist approach, I would argue that problem definition can be discursively constructed. Whilst I do not wish to subscribe to the view that all reality is a linguistic construction and a product of human subjectivity, I believe that the constructionist approach is useful in that it draws our attention to the role of agency and the use of language in ideational/discursive constructions. Subject to structural constraints, it is possible for agents to construct ‘realities’ which ‘necessitate’ new policy responses. However, language is not neutral; it is bound up with notions of power, competition and conflict. Fairclough (1989: 90) notes that, in the struggle over language: what is at stake is the establishment or maintenance of one type as the dominant one in a given social domain, and therefore the establishment of certain ideological assumptions as commonsensical…The stake is more than ‘mere words’; it is controlling the contours of the political world, it is legitimising policy, and it is sustaining power relations. Although the American government had been involved in the promotion of agriculture since the 1860s, it has been argued that the AAA radically altered government’s relationship to agriculture. This Act was passed to confront the severe problems within the agricultural sector, which witnessed farm prices dropping by fifty-six percent and gross farm income being halved between 1929 and 1932. 
However, the AAA did not ‘come out of the blue’; it was the culmination of a more widespread ideational battle regarding the role of the government in managing the economy.
This paper looks at some issues in enterprise restructuring and reforms in Russia. The paper examines the characteristics of privatization and Russian corporate governance, or the lack thereof. The issues of corporate governance and enterprise reforms are particularly important for transitional economies when confronted with the realities of market discipline and global competition. This paper also looks at the efforts to establish a corporate governance system in Russia. An empirical investigation was performed to compare Russia's transition progress to that of other Eastern bloc countries such as Poland and Hungary. An investigation of Russia's enterprise reforms and corporate governance may also stimulate institutional changes in Russia and other former socialist countries. In the past decade, the post-communist countries of Russia and Eastern Europe have carried out transitional reforms. Efforts have been made to privatize state-owned enterprises (SOEs) by transferring ownership to private-sector owners. The initial transition efforts paid off in significant gains in real GDP growth for most of the transition countries, as Table 1 indicates. Russia, however, experienced negative or insignificant growth after the transition. The past decade has shown that countries like Poland, Hungary and the Czech Republic are weathering the transition relatively well while Russia and Romania are encountering serious transitional problems. Privatization, in itself, is insufficient to effect a successful transition to a market economy. What is needed is effective privatization complemented by structural reforms in the legal sector to support and enforce the reforms. Privatization has to occur if a post-communist country is to transform its planned, state-owned economy into a market economy. Privatization promotes economic growth when shareowners have an incentive to maximize wealth through firm value. 
Successful privatization has to consider reforms in three general dimensions: an effective corporate governance system, policies that support business enterprise, and a legal system that protects stockholder rights. The initial phase of privatization is not expected to be optimal, as evidenced by the negative real GDP growth of most of the transition countries. Poland, Hungary, and the Czech and Slovak Republics have experienced consistent positive growth in real GDP in the second half of the decade since the transition process began. However, Russia and Romania have made the least progress. In Bulgaria and Romania, where the transition governments are weak, and in Russia, where there is greater political instability, the privatization programs opened up opportunities for managers to strip enterprise assets and maximize personal cash flows. Consequently, the public's perception of economic injustice from the privatization process undermined the support for privatization, particularly when the standard of living deteriorated after privatization. The examples of the Polish and Czech privatizations show, first, that if there is no firm and clear consensus by the political authorities as to how shares are to be allocated, the privatization plan is doomed to failure, as in the Polish experience. Secondly, if the government supports and provides an opportunity for the private sector to create investment-holding firms and provides safeguards against fraudulent holding firms, the privatization plan has a better chance to succeed, as in the Czech experience. Russia initiated privatization of its state-owned enterprises around 1992. The transfer of SOE assets to the private sector can take several forms, including management-employee buyouts (MEBOs) through the voucher system. The privatization had a significant impact on the enterprise sector. 
The enterprise goal emphasis shifted from the maximization of political and defense targets under planned administration to profit maximization and wealth accumulation. The Russian government recognized the importance of an effective corporate governance system, but the notion of workers' rights and privileges is still strong in post-communist Russia. Communist doctrine emphasizing full employment regardless of workers' productivity is still prevalent, and the privatization process exacerbated the unemployment problem. As a result, the regional governments, with power decentralized to them in the post-communist era, are even more resistant to change and fiercely protect their workers' employment because of their closer ties to the local SOEs. In Russia, as in other transitional economies, depoliticization of the privatization process and the resource allocation process is crucial in severing or reducing the SOEs' dependence on the state and rent-seeking behavior by former political elites. Da Cunha and Easterly (1994) found that selected enterprises and financial conglomerates received massive financial flows from the Russian government in the early privatization period of 1992-93, totaling 33% of GDP. Due to the no-cost "voucher giveaway" transfer of SOE assets to the MEBO stockholders, the privatized SOEs are resistant to change, with little incentive to maximize firm value. The government's lack of fiscal commitment to a hard budget also encouraged Russian managers to depend on the state for soft-budget credits. This only bolstered managers' lack of motivation to change, particularly when they knew that bankruptcy laws were not strictly enforced and were difficult to enforce.
Information Communication Technology in New Zealand SMEs
The New Zealand Government has shown a concern to promote the use of information communication technology (ICT) by New Zealand small to medium size enterprises (SMEs). There has been an enquiry into telecommunication regulation and an ongoing commitment to an E-summit programme. The latter involves both Government and enterprise in ongoing dialogue and public fora. In the May 2002 Budget, for the fiscal year beginning July 1, 2002, the Government introduced a new regional broadband initiative to provide assistance where private telecommunication companies find it unprofitable to upgrade the infrastructure. This study investigates the perceptions of SMEs, as solicited through a quarterly SME survey conducted for the Independent Business Foundation. The survey is now in its third year and provides the opportunity for monitoring changing sentiments and addressing new issues as and when they arise. This paper analyses the perceptions, regarding ICT, of the various groups integrally involved with the SME sector. The Economist Intelligence Unit/Pyramid Research (EIU) study (2001) (www.ebusinessforum.com) into levels of E-preparedness ranked New Zealand 20th, down from 16th the year before. While the impact of ICT across the whole business sector is important, it is essential that the SME sector, including micro businesses, should capture some of the efficiency gains. Government has continued to push ICT but there has been increasing disquiet that business is not moving quickly enough to catch the knowledge wave. Science Minister Hon Peter Hodgson, addressing a pharmaceutical conference in March 2002, observed “I have watched us miss the ICT bandwagon, if I can be blunt. 
And it’s not going to happen again.” (New Zealand Herald, p. E3). Trade NZ, a government department, notes the importance of unleashing the potential gains from ICT for SMEs in underpinning their recent programme of assistance: New Zealand has no other option but to adopt e-business and increase participation of its SMEs in the global economy. E-business has the potential to expand the country’s current exports and grow the number of new exporters. Since uptake of true e-commerce is slow among exporters and other companies, the New Zealand Trade Development Board (Trade New Zealand) has taken on a leadership role through a NZ$10 million project supported by additional funding from the Government. (Trade NZ 2001). In the absence of a commercial imperative or a large stick/carrot regime it may be relatively easy to succumb to complacency in times of reasonable economic growth. Currently, agricultural exports are doing relatively well given the higher international prices for commodities. Nevertheless, it is generally recognised that long-term sustainable competitive advantage needs to be built upon a strong foundation in the knowledge economy. With a small population, a relatively open economy, heavy compliance regimes relating to occupational safety and health, resource management, and employment relations, and the burden of social welfare vis-à-vis other emerging knowledge economies, there are multiple challenges to be faced. The SME sector, and in particular the micro business sector, is a very large component of the New Zealand economy. If ICT offers the opportunity of reducing costs and enhancing supply chain efficiency, then it is important that these potential gains accrue to the SME sector. At the national level telcos (telecommunication companies) continue to dispute interconnection agreements. 
“After Telecom refused to switch WorldxChange’s toll bypass phonecalls to Clear’s network, WorldxChange complained to the Commerce Commission on Friday June 1 accusing Telecom of abusing its market power” (New Zealand Herald 9 June, 2001). It is generally true that competition works to limit the extent to which there are dead weight losses in the system (Williamson 1996, p. 197). However, the ICT environment reflects an ineffectual regulatory and compliance policy framework, which is a typical problem in a heavily bureaucratic structure where administrative process is the objective rather than tangible efficiency gains. The EIU makes this point forcefully, commenting, “The importance of a regulatory regime geared to e-business is clear in our rankings; it is the main factor that puts Australia 18 places ahead of its neighbour New Zealand, which ranks only 20th.” This is despite the government’s involvement in a number of initiatives such as E-summit and ECAT (Electronic Commerce Action Team). The efficacy of these policies needs to be considered in the light of New Zealand’s deteriorating international ranking. Government policy in New Zealand relating to small to medium enterprises (SMEs) has altered significantly during the last decade (Nyamori and Lawrence, 1997). The changes have not followed a consistent pattern but rather have promoted considerable uncertainty in the environment. Commenting on the then most recently announced policy for SMEs, Welham (2000, p. 41) suggests, “they are ‘reinventions of the wheel’ for it has all been done before.” Scrimgeour and Locke (2001) review the decade from 1990, concluding that Government policy in a range of areas appears, among SMEs, to have low credibility. The SME survey has been conducted quarterly since 1999. The telephone interview consists of two parts. First, there are questions relating to the level of operating activity and these are asked each quarter. In addition several special interest questions are asked. 
These typically relate to topical issues and the responses are prepared for business professional magazines. The minimum sample size of 400 provides a margin of error of less than 5%. The typical survey consists of 1,200 calls to allow for meaningful regional and industry comparisons to be made. Sample selection is generated from ‘yellow pages’ telephone listings. The sample is programmed subject to constraints; specifically, two parameters are considered. First, the regions are balanced to ensure that more than 30 enterprises are selected in each chosen region. This biases the sample against the geographical concentration of Hamilton north. Similarly, the industry profiles are not representative of the proportions operating in the economy but rather ensure that minimum sample sizes are greater than 30.
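The quoted sampling precision follows from the standard worst-case margin of error for an estimated proportion, z·√(p(1−p)/n) with p = 0.5. A quick illustrative check (the function below is the author's reconstruction, not part of the survey methodology):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Worst-case 95% margin of error for an estimated proportion."""
    return z * math.sqrt(p * (1 - p) / n)

# A minimum sample of 400 gives just under a 5-point margin of error,
# while a regional sub-sample of 30 is far less precise.
print(round(100 * margin_of_error(400), 1))  # 4.9 percentage points
print(round(100 * margin_of_error(30), 1))   # 17.9 percentage points
```

This is why the survey oversamples small regions and industries to at least 30 respondents: below that, sub-group estimates carry very wide error bands, and even at 30 they remain indicative rather than precise.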
The Internationalisation Process of UK Biopharmaceutical SMEs
Dr. Călin Gurău, School of Management, Heriot-Watt University, Riccarton, Edinburgh, Scotland
The classical models and theories of internationalisation have considered export and out-licensing activities to be the main modes of entry into international markets. The structural changes in the global economy, the emergence of high-technology industries and the increased involvement of SMEs in international activities are challenging these theories. The development cycle of new products and technology has become long, complex and extremely costly. The lack of specialised resources on the domestic market has forced high-technology SMEs to initiate their internationalisation early in order to access essential complementary resources on a global basis. This paper investigates the internationalisation model and the entry modes of UK biopharmaceutical SMEs. Gurău and Ranchhod (1999) have shown that biotechnology is an industrial sector in which internationalisation is likely to occur, because: (1) the sources fuelling the biotechnology industry are international (i.e. finance, knowledge, legal advice, etc.) (Acharya, 1998 and 1999; Russel, 1988); (2) the marketing of biotechnology products and services is international (Acharya, 1999; Daly, 1985); (3) the competition in the biotechnology sector is international (Acharya, 1999; Russel, 1988); and (4) the international community closely scrutinizes scientific and industrial developments in biotechnology (Acharya, 1999; Bauer, 1995; Nelkin, 1995; Russel, 1988). The large pharmaceutical and chemical corporations which began to diversify into biotechnology from the early eighties had the managerial expertise and the financial resources to develop this activity on a global basis (Daly, 1985). They used their existing networks of international assets to solve the problems related to the novel technologies and emerging markets and to defend their dominant position within the industrial markets (Chataway and Tait, 1993; United Nations, 1988). 
On the other hand, the small and medium-sized biotechnology enterprises (SMBEs) are confronted with important problems in their process of internationalisation: limited financial resources, the management and processing of huge amounts of information, restrictive regulations, unfamiliar market environments, etc. These represent significant entry barriers to foreign markets (Acs et al., 1997; Chataway and Tait, 1993; Daly, 1985; OECD, 1997). In spite of these problems, global competition and the structural limitations of their domestic market compel them to become international (Acs and Preston, 1997; Fontes and Coombs, 1997; Daly, 1985). This paper attempts to investigate the internationalisation model specific to UK small and medium-sized biopharmaceutical enterprises (SMBEs), with a special emphasis on the market entry modes designed and implemented by these firms. The classical internationalisation theories are mainly based on two models: the Uppsala model, developed by Johanson and Wiedersheim-Paul (1975), and then refined by Johanson and Vahlne (1977, 1990); and the Management Innovation model, described in the work of Bilkey and Tesar (1977), Cavusgil (1980), Czinkota (1982) and Reid (1981). The evolution of the company from a mainly domestic activity to a fully international profile is described as a slow, incremental process which involves the gradual acquisition, integration, and use of knowledge concerning the characteristics of foreign markets, as well as an increasing commitment of the company’s resources towards international activities. The model also predicts that a firm will first target the markets that are most familiar in terms of language, culture, business practice and industrial development, in order to reduce the perceived risk of the international operations and to increase the efficiency of information flows between the firm and the target market (Johanson and Vahlne, 1977 and 1990). 
The classical theories of internationalisation have been extensively challenged over the years, with numerous scholars advancing various criticisms of their validity and assumptions (Knight and Cavusgil, 1996). These criticisms helped to refine the outline of the previous models, either regarding the incremental characteristics of the internationalisation process (Johanson and Vahlne, 1990), or the main causes and factors that determine and influence the evolution of a company through different stages (Reid, 1984; Welch and Luostarinen, 1988). Cavusgil and Zou (1994), Reid and Rosson (1987) and Welch and Luostarinen (1988) show that the initiation of international operations is usually the result of careful strategic planning, which takes into consideration a wide array of factors such as the nature of the foreign market, the firm’s resources, the type of product, the product life cycle, and the level of anticipated demand in the domestic market. On the other hand, the path to internationalisation does not necessarily have to follow the prescribed stages of development, with many other combinations of strategic options being available to the companies (Reid, 1983; Rosson, 1987; Turnbull, 1987). For example, initial international sales can be realised through a joint venture or an international network of strategic alliances (Hakansson, 1982). Other companies may become international following alternative paths such as licensing, manufacturing or collaborative arrangements, without ever engaging in export activities (Carstairs and Welch, 1982/1983; Reid, 1984; Root, 1987). The definition of small and medium-sized enterprises has fluctuated over the years, using as its main criteria the number of employees and the annual turnover. For the purposes of this study, firms with up to 50 employees will be considered small, and those employing 51-500 people medium-sized. 
This classification is used by the UK Centre for Exploitation of Science and Technology (Keown, 1993). It is widely accepted that SMEs have characteristics different from larger companies (Carson et al., 1995; Jennings and Beaver, 1997). These differences are reflected in three main features (Levy and Powell, 1998): (1) SMEs have limited internal resources; (2) SMEs are managed in an entrepreneurial style; and (3) SMEs usually have a small influence on the market environment. The specificity of the biopharmaceutical sector creates a series of problems and advantages for the international marketing activities of SMBEs.
Impact of Company Market Orientation and FDA Regulations on Bio-Tech Product Development
Dr. L. William Murray, University of San Francisco, CA
Dr. Alev M. Efendioglu, University of San Francisco, CA
Zhan Li, Ph.D., University of San Francisco, CA
Paul Chabot, Xis, Inc., San Francisco, CA
New products produced by Bio-Technology firms – products designed to treat, or cure, human illnesses – require large investments ($150 million +) and take a long time (10-12 years) from idea generation through product launch. These products require full authorization by the U.S. Food and Drug Administration (FDA) before the developing firms are permitted to sell them for use by patients. Little is known about the management processes by which these products are developed. Even less is known about the impact of FDA regulation on the manner in which these products are developed, produced, and distributed. The purpose of this paper is to report the results of a recent survey of professionals employed by Bio-Tech firms to develop new products. The FDA must approve all new pharmaceutical and medical device products designed for use by individuals. A firm interested in developing a new pharmaceutical product must file an application with the FDA, state the goal and define the approach towards discovering possible new products, and provide the FDA with a detailed statement as to how the development process will be managed. If approved, the firm can take the first steps towards developing the product, each step of which must be recorded, analyzed, and summarized in performance review reports to the FDA. Three earlier studies researched the possible impacts of FDA regulations on the development and marketing of new products. An earlier study of the development and production process of diagnostic-imaging equipment suggested that for this type of medical device FDA regulation had little effect. A later study by Rochford and Rudelius (1997) suggested that there are regulatory influences and impacts on product development, if one examines the number of development activities (i.e., stage gates) that the firm performed in their development of a new product. 
A more recent third study of medical device producers by Murray and Knappenberger (1998) further elaborated the relationships between product regulation, the manner in which the product was developed, and the market success of the new product. It concluded that the act of regulation increased the new products’ “time to market”; i.e., the amount of time it took the firm from idea generation through final product launch. Other research studies have looked at how collaboration among process partners, relationships between the firm and its customers, and managerial effectiveness impacted the success of new product development. Langerak et al. (1997) reported that for developers of new products, the more turbulent the environment in which the product was being developed, the greater the importance (to market success) of both internal (within-firm) and external (between-firms) collaborations. Since most new pharmaceutical products are the result of collaborative efforts, it would appear likely that such collaborations would be even more successful in the economically, socially, and politically turbulent world of drug development. Avlonitis and Gounaris (1997) found that the more “oriented” the firm towards its market, the greater the probability of market success of its products and of the firm in general, and in an analysis of the banking industry, Han and Kim (1998) discovered a direct link between the firm’s orientation and its organizational effectiveness, as suggested earlier by Ruekert (1992). Bio-Tech firms develop and market products that are strictly regulated and as such, have to deal with a very diverse set of customers and meet their divergent objectives. 
The new products must be approved for sale (regulatory process); they have to be sold not to but through the medical community (distribution channel); they must be “approved” for reimbursement by the patient’s insurance company or HMO (payment for the product); and finally, they must meet the patient’s needs (gain value based on need). In developing and marketing a new product, the Bio-Tech companies have to address and accommodate these four distinct and different customer objectives and be successful in meeting their primary organizational profit objective. However, given the high degree of specialization in one or more of the tasks required to produce a new product, the high cost, and the very long time it takes to market these products, it is difficult to judge whether any of these firms has really succeeded in their efforts.
Country-of-Origin Effects on E-Commerce
Dr. Francis M. Ulgado, DuPree, Georgia Institute of Technology, Atlanta, GA
This paper examines Country-of-Origin effects in an e-commerce environment. In addition to Country-of-Brand and Country-of-Manufacture effects, the paper investigates the presence and significance of Country-of-E-commerce Infrastructure. It develops hypotheses regarding such effects amidst varying customer and market environments, such as business vs. consumer buyers, levels of economic development and product type, and proposes a methodological framework to test the hypotheses. Recent years have witnessed a rapid increase in the range of multimedia technologies available internationally. Among them, Internet technology has dramatically changed the shopping environment for individual consumers and businesses throughout the globe. The number of consumers worldwide purchasing through business-to-business as well as business-to-consumer e-commerce media (“e-commerce” hereafter) has been skyrocketing. However, preliminary statistics indicate that the level of growth and development of internet and e-commerce infrastructure varies across countries and has generally lagged behind the United States. Meanwhile, current research has also indicated the continued prevalence of country-of-origin effects on consumer perception of the products or services that they purchase. This study investigates the presence and significance of country-of-origin effects on buyer perception in the e-commerce environment. While country-of-brand and country-of-manufacture dimensions have been investigated in the past, this paper adds country-of-e-commerce-infrastructure effects. These three variables are selected to be examined under different business-to-business, business-to-consumer, and level-of-development environments. The size of the worldwide market for e-commerce was about 66 billion dollars in 1999 and is expected to grow to about 1 trillion dollars this year. In the U.S. 
alone, this is expected to reach $33 billion by the end of this year (Nielsen//Net Ratings Holiday, E-Commerce Index, 1999). While this significant global growth is widely expected and documented, it has also been observed that the rest of the world lags behind the United States. In contrast to the U.S. for example, regions such as Asia, Latin America, and Eastern Europe are behind in development and growth of e-commerce in terms of infrastructure, buyer acceptance, and use. Moreover, different countries themselves also exhibit varying degrees of growth and development relative to their neighbors in the same region. Even amongst developed countries such as Canada, Japan, and Western European nations, the U.S. remains far ahead of the game. It is therefore not surprising that according to recent studies, U.S. web sites such as Yahoo! or Amazon dominate the international market. Similar studies also indicate that in general, business-to-business e-commerce so far exceeds business-to-consumer transactions on the internet. In addition, while the internet may be seen as a marketing medium or tool that would globalize business, the literature suggests that the varying cross-cultural environments across countries in terms of the legal, political and cultural variables have resulted in different purchase behaviors and attitudes towards e-commerce. In countries such as China, government regulation and intervention in e-commerce has been more significant, resulting in a relatively more politically influenced and legally constrained commercial environment. The uncertainty and risk resulting from such a situation has hampered the development of the infrastructure and the attitudes of consumers. Finally, even in "wired" cosmopolitan Hong Kong, cultural traditions and preferences have hampered U.S.-proven e-commerce formats such as online grocery shopping. The lack of supporting financial infrastructure in other countries has also hampered the development of e-commerce. 
For example, an Asian-based online toy seller has resorted to processing online orders through U.S. banks since no local banks are willing to do so. In more economically developed areas of the world, most consumers in European countries are required to pay by the minute while online, significantly influencing their ability to participate in e-commerce. Even in communications technology-savvy countries such as Sweden, Denmark and Finland, most households use the internet primarily for activities such as e-mail, information, and working at home, and significantly less for e-commerce. In other countries that do exhibit e-commerce activity, research has found varying consumer behavior. For example, web-site preferences have been shown to vary across e-consumers of the United Kingdom, France and Germany. Given such varied and complex cultural, legal and political influences on the e-commerce experience in other countries, it is not surprising that internet-based companies have had difficulty expanding their markets internationally. For example, London-based Boo.com, a sports and urban fashion “e-tailer”, has had the European expansion of its multicultural brands to a sophisticated clientele stymied by different software, supply chains, currencies, EU regulations, and tax and customs laws. One issue that such international e-marketers face is the possible influence of country image on their potential buyers. The perception of a country by a consumer can significantly affect their perception of a product or service associated with that country and the resulting buyer behavior. Such influence has been termed the “Country-of-Origin” effect on consumer perception. Various studies have offered a range of definitions explaining COO (e.g., Bilkey and Nes 1982; Han and Terpstra 1988; Johansson, Douglas, and Nonaka 1985; Thorelli, Lim and Ye 1989; Wang and Lamb 1983). 
However, as manufacturing or assembly locations have increasingly become separated from the country with which the firm or brand is associated, the term "origin" has become vague.
The FASB Should Revisit Stock Options
Dr. Ara Volkan, State University of West Georgia, Carrollton, GA
Accounting for employee stock options has been a source of controversy since Accounting Research Bulletin No. 37 was issued in November 1948. In 1995, after more than 12 years of deliberation, the FASB issued Statement of Financial Accounting Standards No. 123 (FAS 123). The pronouncement encouraged, but did not require, firms to adopt a fair value pricing model to measure and recognize the option value at the grant date and record a portion of this amount as an annual expense over the vesting period of the option. Moreover, FAS 123 did not require the quarterly calculation and disclosure of the option expense. The primary purpose of this paper is to highlight the flaws in FAS 123 and explore alternative methods of accounting and reporting for stock options that address these flaws. In addition, two studies that evaluate the impact these alternatives have on annual and quarterly financial statements are analyzed. Finally, accounting procedures are recommended that will report more reliable and useful information than current rules provide. Given that two Congressional Subcommittees are intending to propose fair valuation and expensing of stock options when they finish their investigations into the Enron debacle, the content of this paper is both timely and relevant.

Accounting for employee stock options has been a source of controversy since Accounting Research Bulletin No. 37 was issued in November 1948. Subsequent pronouncements, Accounting Principles Board Opinion No. 25 (APBO 25) issued in 1972 and Financial Accounting Standards Board (FASB) Interpretation No. 28 issued in 1978, continued the tradition of allowing fixed stock option plans to avoid recording compensation expense as long as the exercise price was equal to or exceeded the market price at the date of grant. In 1995, after more than 12 years of deliberation, the FASB issued Statement of Financial Accounting Standards No. 123 (FAS 123). 
The pronouncement encouraged, but did not require, companies to adopt a fair value pricing model to measure and recognize the option value at the grant date and record a portion of this amount as an annual expense over the vesting period of the option. The firms that chose not to follow the recommendations of FAS 123 could continue to use the requirements of APBO 25. These firms had to disclose the pro forma impact of FAS 123 requirements on their annual earnings and earnings per share (EPS) in the footnotes of their annual reports. However, FAS 123 did not require the quarterly calculation and disclosure of the option expense. The primary reason the FASB did not require companies to record an option expense was pressure from the business community and Congress. Because of this pressure, the FASB reversed the accounting proposals contained in its Exposure Draft – Accounting for Stock-Based Compensation (ED) issued in 1993 and opted for a realization and footnote disclosure approach in FAS 123 as opposed to the realization and financial statement recognition approach that was advocated in the ED. Another major obstacle to recognition was the narrow scope of the definitions of assets, expenses, and equity provided in Statement of Financial Accounting Concepts No. 6 (SFAC 6). Thus, the FASB was not entirely successful in delivering on its stated intention to provide neutral accounting information to assist users in assessing investment opportunities. To its credit, the FASB has recognized that SFAC 6 should be revised to address certain transactions that under current standards are not properly measured, recorded, and reported. Thus, in a pair of October 27, 2000 exposure drafts (file reference numbers 213B and 213C) concerning accounting for financial instruments with characteristics of liabilities, equities, or both, the FASB noted its intention to amend the definition of liabilities to include obligations that can or must be settled by issuing stock. 
The reporting requirements of APBO 25 can result in vastly different treatments for compensation packages that have similar economic consequences for both the employer and the employee. For example, a company that issues stock appreciation rights (SARs) must record compensation expense for any increase in the market value of the stock between the grant date and the exercise date, whereas no compensation expense is recorded for a fixed employee stock option with similar cash flow consequences. The primary purpose of this paper is to highlight the flaws in FAS 123 and explore alternative methods of accounting and reporting for stock options that address these flaws. In addition, two studies that evaluate the impact these alternatives have on annual and quarterly financial statements are analyzed. Finally, accounting procedures that will report more reliable and useful information than current rules provide are recommended. The following sections briefly discuss the current requirements for accounting for stock options and evaluate the other approaches previously suggested by the FASB. Next, alternatives are offered that are superior for measuring compensation expense on both an annual and a quarterly basis and are consistent with accounting for other expenses. A recent article in the Wall Street Journal (June 4, 2002) disclosed that stock options now equal more than half of top CEOs’ compensation. For the top 200 industrial and service companies ranked by revenues, 58% of executive compensation for 1999-2001 was in the form of stock options. It is clear that stock option plans are valuable tools for most companies. If the option has a value as of grant date, the value should be an expense to the company, reducing both net income and EPS. Yet under APBO 25 and the popular alternative allowed under FAS 123 requirements currently in force, assuming an exercise value at or above the market value of the stock at the grant date, no expense would be recorded. 
On rare occasions, when the market price exceeds the exercise price at the grant date and a compensation expense arises from the issuance of a stock option, the employer must record the difference as a debit to deferred compensation expense and allocate it as compensation expense to the periods in which the services are performed. However, changes in stock prices during the service period are not taken into consideration. From an employee’s perspective, the option takes on value when the market price of the stock exceeds the exercise price. From the firm’s perspective, costs are incurred when stock is issued to the employees at the reduced price, since the firm gives up cash it could have received by selling the shares in the market instead of to employees. Thus, future market conditions are relevant to both the employee and the employer. Attempts to measure future costs without incorporating the most current market conditions can result in poor estimates. In comparison, for variable plans, such as SARs, which entitle an employee to receive cash, stock, or a combination based upon the appreciation of the market price above a selected per-share price over a specified period, the total compensation expense is determined at the measurement date, which, for SARs, is generally the exercise date. Therefore, between the grant date and the exercise date, the compensation expense must be estimated. The estimated compensation expense and associated liability are determined on a quarterly basis by multiplying the number of SARs by the difference between the market price of the stock and the SAR base price. Amortization is required over the lesser of the service or vesting period. However, after the service or vesting period ends, compensation expense continues to be adjusted based on fluctuations in the market price until the SARs expire or they are exercised. 
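The quarterly SAR mechanics described above can be sketched numerically. The following is a simplified, hypothetical illustration (function name and figures are ours; it ignores forfeitures and other practical adjustments): each quarter the total estimated expense is the SAR count times the excess of market price over base price, amortized pro rata over the vesting period, with the quarter's charge being the change in the cumulative target.

```python
def sar_expense_schedule(num_sars, base_price, quarterly_prices, vesting_quarters):
    """Quarterly SAR compensation expense under a mark-to-market estimate.
    Cumulative target = intrinsic value (num_sars * max(price - base, 0)),
    amortized over the vesting period; once vested, the full intrinsic
    value is recognized and re-adjusted as the market price moves."""
    expenses = []
    recognized = 0.0  # cumulative expense recognized to date
    for quarter, price in enumerate(quarterly_prices, start=1):
        intrinsic = num_sars * max(price - base_price, 0.0)
        if quarter < vesting_quarters:
            target = intrinsic * quarter / vesting_quarters  # pro-rata amortization
        else:
            target = intrinsic  # fully vested: recognize full intrinsic value
        expenses.append(target - recognized)  # can be negative if price falls
        recognized = target
    return expenses

# Hypothetical example: 100 SARs, $10 base, four quarterly prices, 4-quarter vesting.
schedule = sar_expense_schedule(100, 10.0, [12.0, 14.0, 13.0, 16.0], 4)
```

Note that a falling stock price (quarter 3 above) reduces the cumulative target, so the quarterly charge shrinks or can even reverse, which is exactly the volatility in reported income that distinguishes SAR accounting from fixed-option accounting under APBO 25.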
A comparison of these two stock compensation plans (fixed stock options and SARs) indicates that while the economic and cash flow consequences appear to be identical (i.e., unfavorable impact on cash flows either in the form of payments to the holder or issuance of stock at a price lower than the market price), their accounting requirements and resultant quarterly income statement and balance sheet effects differ substantially. In January 1986, the FASB agreed that the compensation cost of stock options and stock award plans should be measured at the date of grant by using a Minimum Value (MV) model. However, the FASB reversed itself six months later and agreed that costs should be measured using a fair value model and at the later of the vesting date or the date on which certain measurement factors, including the number of shares and purchase price, would be known. The FASB initially embraced the MV method because it was believed to be conceptually sound, objectively determinable, and easily computed. The MV of an option is defined as the market price of the stock minus the present values of the exercise price and expected dividends, with a lower bound of zero.
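The Minimum Value computation defined above can be written directly from its definition. This is a sketch under stated assumptions (continuous discounting and the function signature are our illustrative choices; the definition itself does not prescribe a discounting convention):

```python
import math

def minimum_value(stock_price, exercise_price, dividends_pv, risk_free_rate, years):
    """Minimum Value (MV) of an option at the grant date: the market price
    of the stock minus the present values of the exercise price and of
    expected dividends, with a lower bound of zero."""
    pv_exercise = exercise_price * math.exp(-risk_free_rate * years)
    return max(stock_price - pv_exercise - dividends_pv, 0.0)

# Hypothetical at-the-money grant: $50 stock, $50 strike, 10-year term,
# 5% rate, no dividends -> MV = 50 - 50*exp(-0.5), roughly $19.67.
mv = minimum_value(50.0, 50.0, 0.0, 0.05, 10)
```

The example shows why the MV method is "easily computed": even an at-the-money grant, which carries no expense under APBO 25, has a substantial Minimum Value purely from the time value of the deferred exercise price. The zero floor applies when the discounted exercise price plus dividends exceeds the stock price.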
The world of marketing channels is changing. A deepened focus on the customer experience, micro segmentation, and the use of technology is leading to two key developments. First, an increasing number of companies are moving toward using flexible channel systems. Microsoft’s bCentral is an example of a company that has built a flexible channel system. The second development in the world of marketing channels is that companies are reaching customers using multiple media. These media afford marketers the opportunity to reach customers in the way they would like to be reached, and deliver an ever more customized buying experience to them. Avon is an example of a company that has moved from using just one way to reach customers to multiple ways in the span of a few years. These two developments create a host of new challenges for marketers. They must decide whether to use flexible channel systems or vertically integrated distributors/retailers. What criteria should be used to make these choices? And if a flexible channel system is used, what organizational changes should be made in order to work effectively with channel partners? What new skills and resources are needed by a marketer to work effectively with members of a flexible channel system versus vertically integrated distributors? The spread in the use of new media raises the difficult issue of how a marketer can integrate all of the various media to deliver an experience customers can actually enjoy. What does channel integration mean anyway, and how should this integration be realized?
Institutional and Resource Dependency Effects on Human Resource Training and Development Activity Levels of Corporations in Malaysia
Dr. Zubaidah Zainal Abidin, Universiti Teknologi Mara, Shah Alam, Malaysia
Dr. Dennis W. Taylor, University of South Australia, Adelaide, Australia
This study considers managerial motives and orientations affecting decisions about levels of employee training and development (T&D) activities. Specifically, arguments drawn from institutional theory and resource-dependency theory are used to articulate variables that seek to uncover these managerial motives and orientations. Using listed companies in Malaysia, a field survey was conducted amongst two groups of managers deemed to have influence on the determination of annual T&D budgets and output targets, namely, human resource (HR) managers and finance managers. The results reveal that T&D activity levels are affected by institutional-theory-driven components of management dominant logic and by perceived organizational resource dependencies on employees versus shareholders. But there are contrasts in the significance of these variables as perceived by HR managers compared to finance managers. In Malaysia, there is a relatively high level of investment in human resources (mainly training and development expenditure) by companies. The federal government’s Human Resource Development Fund (HRDF) was established in 1993. Its purpose has been to encourage and help fund human resource investment activities by companies. Through reimbursements of eligible T&D expenditures, the HRDF scheme in Malaysia provides corporate managements with a strong incentive to allocate budget expenditure to T&D programs and to report on numbers of employees trained and developed. But Malaysian companies have not been consistent in taking advantage of this government scheme. This is evidenced by variability in the ratio of levies collected to claims paid by the HRDF on a company-by-company basis, suggesting that corporate managements treat T&D activity levels as quite discretionary in their planning and annual budgeting. What factors influence management’s choice of the annual T&D activity level? 
This study will focus on whether the level of T&D activity is determined by variables embedded in institutional and resource-dependency theories. The motivation for addressing this research question is that insights can be provided about management behaviour in an operating functional area of the company (i.e., investment in human resources) that has economic or human consequences of relevance to employees, shareholders and government oversight bodies. To employees, T&D programs provide the means of maintaining their own competitiveness within their employer organization by improving knowledge, skills and abilities, especially if their current workplace environment is dynamic and complex (Lane and Robinson, 1995). To shareholders, T&D expenditure is seen as reducible in times of economic stringency in order to meet short-term profit targets, but the importance of knowledge and intellectual capital is also recognized as critical in business success (Pfeffer and Veiga, 1999). To government oversight bodies (such as the HRDF body in Malaysia), levels of corporate T&D are broadly viewed as improving the value of the country’s human capital (Huselid, 1995). This study empirically investigates the relationship between corporate T&D activity levels and the factors which influence the thinking of top HR managers and finance managers involved in the setting of T&D budgets and T&D output targets in their company. The study is confined to an investigation of large listed corporations in Malaysia, and to two players in the top management team (i.e., the HR manager and the finance manager/controller). Institutional and resource dependency theories are invoked in this study. These theories are widely used as underlying perspectives to inform empirical research into managerial behaviour. 
Given that both the top finance manager and the HR manager have substantial influence on the determination of their corporation’s annual T&D budget and targets, their more broadly developed motives and orientations would be expected to have a causal relationship to their company’s actual T&D budgets and outputs. Ulrich and Barney (1984) argued that many important similarities and differences in organization behaviour research were often under-examined because of a lack of comparison and integration among perspectives of organizations. They contended that a multi-perspective approach to organizational research would help to more fully explain certain behaviours and their implications. As Hirsch et al. (1987) claimed, the strength of organizational research is its “polyglot of theories that yields a more realistic view of organizations”. Institutional theory and resource dependency theory are two alternative ways of thinking about influences on T&D activity level decisions, and each has a different frame of reference. In the institutional perspective, the key frame of reference is the phenomenon of isomorphic behaviour that tends towards legitimization of management’s actions. As managers engage in isomorphic behaviour, the accumulation of legitimacy concerns begins to take on its own structure. In the resource dependency perspective, the structure of concern is the organization and its bundle of agency relationships. In this sense, there is an aggregating relationship between organizations and their resource dependencies. Resource dependencies are bundles of resource-providers (particularly human and financial resource-providers) that have certain characteristics in common. Explanatory variables arising from the isomorphic dimensions of institutional theory are identified by Kossek et al. (1994) in the notion of managerial dominant logic (MDL). 
The concept of management dominant logic, first developed by Prahalad and Bettis (1986), includes managerial practices, specific skills used by key actors, experiences stored within the organisation and cognitive styles used to frame problems in specific ways (Bouwen and Fry, 1991). According to Prahalad and Bettis (1986), a dominant logic can be seen as resulting from the reinforcement that follows from doing the ‘right things’ with respect to a set of business activities. In other words, when top management effectively performs the tasks that are critical for success in the core business, they are positively reinforced by economic and social successes. This reinforcement leads top management to focus effort on the behaviours that led to their success. Hence they develop a particular mindset and repertoire of tools and preferred processes. This, in turn, determines the approaches they are likely to use in resource allocation, control over operations and intervention in a crisis. Kossek et al. (1994) used this notion to examine HR managers’ responses to institutional pressures to support the adoption of employer-sponsored childcare as a form of organisational adaptation to change. They found three dimensions of MDL, labelled ‘management control’, ‘environmental’ and ‘coercive’. These dimensions form an overall management orientation toward employer-sponsored childcare. Kossek et al.’s study supports previous research on the link between work practices and institutional influences (for example, Tolbert and Zucker, 1983; Eisenhardt, 1988; Scott and Meyer, 1991). But no previous study has directly tested the belief that MDL variables affect managers’ decisions about T&D activity levels. Nevertheless, it is reasonable to speculate that such relationships may exist.
Back-Testing of the Model of Risk Management on Interest Rates Required by the Brazilian Central Bank
Dr. Herbert Kimura, Universidade Presbiteriana Mackenzie and Fundação Getulio Vargas, São Paulo, Brazil
Dr. Luiz Carlos Jacob Perera, Universidade Presbiteriana Mackenzie and Faculdade de Ciências Econômicas, Administrativas e Contábeis de Franca FACEF, São Paulo, Brazil
Dr. Alberto Sanyuan Suen, Fundação Getulio Vargas, São Paulo, Brazil
The model proposed by the Brazilian Central Bank for interest rate positions represents the regulator’s first attempt to define a quantitative methodology for assessing the market risk of portfolios. Since the model allows discretion in the choice of criteria for interpolation and extrapolation of interest rates, banks may be able to reduce their capital requirements simply by using different methods of defining the term structure. This study verifies the impact of such methods on the assessment of interest rate risk, especially in the highly volatile Brazilian market. In addition, it discusses, through simulations, whether the model defined by the regulator can influence the willingness of financial institutions to assume more credit risk by lending to counterparties with poor credit ratings and by making more long-term loans. Following guidelines suggested by the Basle Committee, the Brazilian Central Bank has issued rules setting capital requirements as a function of assumed market risk. Brazil initiated efforts towards specific market risk regulation by issuing legislation on the risk evaluation of positions exposed to fixed interest rate fluctuations, according to the parametric variance-covariance model. Given the complexity of the risk factors in the Brazilian economy, which is clearly subject to major fluctuations in market parameters, it is important for Brazilian financial institutions to implement risk evaluation tools that allow a better estimation of potential losses. To illustrate the Brazilian economic scene: despite the relative success of the stabilization plan implemented in 1994, which sought to reduce inflation that had exceeded 80% in March 1990, the interest rate is still one of the highest in the world (around 20% per year), having reached 47% per year during the 1997 Asian crisis. 
Moreover, in 1999 the Brazilian currency was devalued by almost 50% in a single month, owing to a crisis of investor confidence in the conduct of economic policy. In such a highly volatile context, most of the Brazilian banking segment has implemented several risk measurement methodologies, both through value-at-risk measures and through projections in stress tests. Because of the specificities of the Brazilian economy, market practice has been more demanding in some respects than international regulation itself. For instance, while the Basle Committee requires quarterly updates of the variance-covariance matrix, the Brazilian Central Bank determines daily risk parameters for fixed rates. Financial institutions themselves, in most cases, re-estimate the statistical risk parameters daily, incorporating daily changes in the correlations between variables associated with market risk factors. In this research, using portfolios built from financial assets and data from the Brazilian markets, maximum potential losses are estimated with the variance-covariance value-at-risk models and parameters set by the Brazilian Central Bank. To verify whether the Brazilian market risk regulation adequately reflects potential fluctuations of the term structure of interest rates, this study performs back-testing procedures on the methodology required by the Brazilian Central Bank. Thus, the article tries to identify whether the mathematical model imposed by the Brazilian Central Bank to calculate value-at-risk is conservative or aggressive relative to the effective losses caused by Brazilian interest rate fluctuations. 
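The parametric variance-covariance VaR and the exceedance-count back-test described above can be sketched as follows. This is a minimal illustration, not the Central Bank's actual model: the exposures, simulated return series, and use of the 99% normal quantile are hypothetical assumptions.

```python
import numpy as np

def parametric_var(weights, cov, z=2.33):
    """One-day parametric (variance-covariance) VaR of a portfolio.

    weights: monetary exposures to each risk factor
    cov:     covariance matrix of daily risk-factor returns
    z:       standard normal quantile (2.33 ~ 99% confidence, an assumption here)
    """
    variance = weights @ cov @ weights
    return z * np.sqrt(variance)

def backtest_exceedances(pnl, var_series):
    """Back-test: count days on which the realized loss exceeded the reported VaR."""
    return int(np.sum(-pnl > var_series))

# Hypothetical data: two fixed-rate exposures, 250 simulated trading days.
rng = np.random.default_rng(0)
returns = rng.normal(0.0, 0.01, size=(250, 2))   # daily factor returns (illustrative)
weights = np.array([1_000_000.0, 500_000.0])     # exposures (hypothetical, in BRL)

cov = np.cov(returns, rowvar=False)
var_99 = parametric_var(weights, cov)

pnl = returns @ weights                          # daily portfolio P&L
exceedances = backtest_exceedances(pnl, np.full(len(pnl), var_99))
print(f"1-day 99% VaR: {var_99:,.0f}; exceedances in 250 days: {exceedances}")
```

A back-test of this kind judges the model conservative if exceedances fall well below the expected count (about 2.5 days at 99% over 250 days) and aggressive if they fall well above it.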
In March 2000, the Brazilian Central Bank issued Circular 2972, which sets the calculation rules for one of the components of the capital required from financial institutions operating in Brazil, as a function of the interest rate risk of positions in national currency. This component depends on the value-at-risk, calculated according to a specific parametric variance-covariance model whose calculation procedures and parameters are publicly available, assuring the financial market transparency regarding the rules adopted. The mathematical model for calculating the risk of fixed-rate positions involves, first, establishing the cash flows expected by the financial institution. This procedure requires an initial processing of information, since some financial instruments are stored in the database at their face value while other operations are recorded at present value accrued by the interest rate specified in the funding or lending operation. It is also important to identify which positions are exposed to interest rate variation, such as the fixed legs of futures contracts. Marking to market is accomplished with the spot interest rates implied by interest rate futures or by swaps of floating for fixed rates traded on the São Paulo Commodities and Futures Exchange. For simplicity, and given the features of the Brazilian market, the model assumes that the forward rate between the maturities of futures and swap contracts is constant, and that for terms longer than two years the spot interest rate is equal to the two-year rate. Moreover, following market practice, interest rates are quoted as effective rates for a term of 252 working days, equal to one year. 
Thus, the present value of a cash flow occurring at time T is obtained by discounting at an interest rate interpolated from R0, the rate on one-day interbank deposit certificates (CDI), and Rj,j+1, the forward rate implied between the j-th and (j+1)-th maturities of CDI, futures or swap contracts, where Tj is the maturity, in working days, of the specified operations.
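The flat-forward construction described above (a constant forward rate between adjacent vertices, a flat curve beyond two years, rates effective for 252 working days) can be sketched as follows. The curve vertices below are hypothetical, and the function is an illustration of the interpolation technique, not a reproduction of the Circular's exact formula.

```python
def flat_forward_rate(T, vertices):
    """Spot rate at maturity T (working days) by flat-forward interpolation.

    vertices: sorted list of (days, annual rate) pairs; rates are effective
    for a 252-working-day year, as in the Brazilian market convention.
    A constant forward rate is assumed between adjacent vertices, and the
    curve is flat beyond the last vertex (mirroring the two-year cap).
    """
    days = [d for d, _ in vertices]
    rates = [r for _, r in vertices]
    if T <= days[0]:
        return rates[0]
    if T >= days[-1]:
        return rates[-1]                      # flat extrapolation past the last vertex
    # locate the bracketing vertices j and j+1
    j = max(i for i, d in enumerate(days) if d <= T)
    f_j = (1 + rates[j]) ** (days[j] / 252)       # capitalization factor at Tj
    f_j1 = (1 + rates[j + 1]) ** (days[j + 1] / 252)
    # constant forward rate = geometric interpolation of the factors
    w = (T - days[j]) / (days[j + 1] - days[j])
    f_T = f_j * (f_j1 / f_j) ** w
    return f_T ** (252 / T) - 1

# Hypothetical curve: overnight CDI at 20%, with 6-month and 2-year vertices.
curve = [(1, 0.20), (126, 0.21), (504, 0.22)]
print(round(flat_forward_rate(252, curve), 4))    # interpolated 1-year spot rate
```

Because the regulation leaves the interpolation method partly discretionary, two banks holding the same cash flows can report different present values, and hence different VaR figures, which is precisely the effect the study investigates.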
Analysis of Dynamic Interactive Diffusion Processes of the Internet and New Generation Cellular Phones
Dr. Kaz Takada, Baruch College/ CUNY, New York, NY
Dr. Fiona Chan-Sussan, Baruch College/ CUNY, New York, NY
Dr. Takaho Ueda, Gakushuin University, Tokyo, Japan
Dr. Kaichi Saito, Nihon University, Tokyo, Japan
Dr. Yu-Min Chen, J. C. Penney, Dallas, TX
NTT DoCoMo has experienced unprecedented success with its i-mode cellular phone services in Japan. In this study, we analyze the diffusion of the i-mode and other second-generation (2-G) cellular phones, and their dynamic interactive effect on Internet diffusion is modeled and empirically tested with diffusion data. The empirical results clearly support the hypothesized relationship between the two, indicating that in the short term the rapid diffusion of the 2-G has a negative effect on the diffusion of the Internet. However, we contend that in the long run the diffusion of these technologies should exert a positive and complementary effect on each other. The introduction of NTT DoCoMo’s i-mode cellular phone services in 1999 made Japan the number one mobile commerce (m-commerce) nation by 2001. The success of the i-mode service is such a phenomenon that every major newspaper and magazine has had at least one article written about it in the last twenty-four months (Barrons 2000; Business Week 2000; Fortune 2000, among others). How does the i-mode phenomenon affect traditional Internet diffusion through the use of personal computers (PCs), and how does it affect the future of Internet diffusion? The i-mode represents a new generation of the cellular phone, capable of performing various functions beyond those of traditional voice-based cellular telephones. According to NTT DoCoMo, the major characteristic of the i-mode phone is that people can access online services including balance checking and fund transfers from bank accounts and retrieval of restaurant and town information. In addition to conventional voice communications, users can access a wide range of sites by simply pressing the i-mode key. The service lineup includes entertainment, mobile banking and ticket reservations. 
The i-mode employs packet data transmission (9600 bps), so communications fees are charged by the amount of data transmitted and received rather than the amount of time online. The i-mode is compatible with Internet e-mail and can transfer mail between i-mode terminals. Packet transmission allows sending and receiving of e-mail at low cost. The i-mode, although dominant in the market, is not the only service; other providers offer cellular phone services with comparable features and capabilities. In this study, we analyze the effect of the introduction of these new second-generation (2-G) cellular phones in Japan. Specifically, our research question is posited as follows: does the explosive growth of second-generation cellular phones stimulate the adoption of Internet access among Japanese households, or suppress it? Diffusion research in marketing has a rich literature. Since Bass (1969) published his seminal work on the new product growth model, hundreds of papers have been published in the leading marketing and management journals (see Mahajan, Muller, and Bass 1991 for a comprehensive review and the references therein), and various modifications and refinements have been made to the original Bass model. The studies demonstrate that the Bass model has a superb forecasting capability for durable goods even with limited data available. More importantly, the model can provide valuable information on diffusion processes, such as the coefficients of external influence (p) and internal influence (q) and the potential market size (m). The interactive effect between the diffusion processes of second-generation cellular phones and the Internet, which we analyze in this study, is rather unique in the diffusion literature, and not many studies have tackled this problem. 
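The Bass model quantities named above (p, q, m) can be made concrete with a minimal sketch of the model's cumulative-adoption formula. The parameter values below are hypothetical illustrations, not estimates from the study's data.

```python
import math

def bass_adopters(t, p, q, m):
    """Cumulative adopters at time t under the Bass (1969) diffusion model.

    p: coefficient of external influence (innovation)
    q: coefficient of internal influence (imitation)
    m: potential market size
    Closed form: N(t) = m * (1 - e^{-(p+q)t}) / (1 + (q/p) e^{-(p+q)t})
    """
    e = math.exp(-(p + q) * t)
    return m * (1 - e) / (1 + (q / p) * e)

# Hypothetical parameters for illustration only.
p, q, m = 0.03, 0.38, 60.0          # m in millions of subscribers (assumed)
for year in range(0, 11, 2):
    print(year, round(bass_adopters(year, p, q, m), 1))
```

With q much larger than p, adoption starts slowly, accelerates through word-of-mouth imitation, and saturates toward m, producing the familiar S-shaped curve that the interactive two-technology extension in this study builds on.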
Norton and Bass (1987) analyzed the substitution effect of the diffusion processes of successive generations of high-technology products. Their comprehensive substitution effect model is capable of capturing, for example, Intel’s introductions of CPUs with remarkable precision. Their study has implications for ours insofar as diffusion processes of different products or technologies exhibit a substitution effect, as delineated by the aforementioned research question. However, their study assumes that substitution occurs among innovations of successively improved technologies. Our study, on the other hand, deals with two different innovations, so the nature of the substitution effect in our study is vastly different from theirs. The new product growth model has proven to be a very effective model for analyzing diffusion processes across different countries and cultures. Since Gatignon, Eliashberg, and Robertson (1989) and Takada and Jain (1991) applied the Bass model to international marketing data, numerous studies have attempted to analyze the diffusion processes of a variety of products and services in international marketing (Ganesh, Kumar, and Subramaniam 1997; Helsen, Jedidi, and DeSarbo 1993; Kalish, Mahajan, and Muller 1995; Putsis, Balasubramanian, Kaplan, and Sen 1997; Tellefsen and Takada 1998; Dekimpe, Parker, and Sarvary 2000, among others). Takada and Jain (1991) analyzed diffusion processes of durable goods in the Pacific Rim countries, including Japan, where the i-mode was first introduced. They found two major effects in cross-country diffusion, namely the country and time effects. The country effect indicates that the diffusion rate in high-context cultures (Hall 1981) is faster than in low-context cultures. This implies that the 2-G cellular phones in Japan are expected to diffuse faster than in low-context cultures such as European countries and the United States. 
The time effect refers to the finding that diffusion in the lead country, that is, the country where an innovation is first introduced, is slower than in lagged countries. This implies that diffusion of the 2-G cellular phones in countries other than Japan tends to be faster than in Japan.
Franchise Development Program: Progress, Opportunities and Challenges in the Development of Bumiputera Entrepreneurs in Malaysia
Issues related to the involvement of Bumiputeras in the development of the country, mainly in the business sector, have received attention from the ruling government since the country’s independence. Before independence, British colonial policy caused the Bumiputeras to be left far behind other races in many respects. To address this problem, the government launched the New Economic Policy (NEP), which focused on eliminating poverty and restructuring society among Malaysia’s many races. The NEP era was succeeded by the National Development Policy (NDP), which aims to continue where the NEP left off. Under the NDP, the government designed programs to increase the number of Bumiputeras in the trading sector through the Bumiputera Community Trade and Industry Plan (BCTIP). In parallel with the strategy outlined in the resolution of the Third Bumiputera Economic Congress held in 1992, this paper attempts to evaluate and analyze the achievements and opportunities in the franchise development program, a vital mechanism for encouraging Bumiputera involvement and contribution to the nation’s economy. This paper also attempts to examine the main challenges faced by Bumiputera entrepreneurs in the franchise development program. Issues relating to Bumiputera involvement and national development began to gain the ruling government’s attention once independence was achieved. Pre-independence British policies clearly left the Bumiputeras behind other races in many areas. Realizing that national unity could only be achieved if the riches of the nation were shared equally among all races, the Bumiputera economic development agenda was given attention in the economic development plans of the nation. Government involvement in this area began under the first Prime Minister, Tengku Abdul Rahman, and has continued until today. 
Realizing that an unequal pattern of wealth distribution would affect national unity, as in the May 13 Tragedy of 1969, the government designed the New Economic Policy (NEP) (1970-1990). The plan aimed to eliminate poverty and restructure the community in Malaysia. Although the NEP did not state an exact number of entrepreneurs to be produced, the public commitment to 30% national equity ownership was a step taken by the government to encourage active Bumiputera involvement in the trade and industry sector. Unfortunately, at the end of the NEP in 1990, the Bumiputera had managed to accumulate only 20.1% of the nation’s wealth. The main reason the 30% goal was not achieved was the economic crisis that hit Malaysia in the mid-1980s. The beginning of the National Development Policy (NDP) marked the end of the NEP era. Again, the government designed special programs to increase the number of Bumiputeras in the trade sector. The Bumiputera Community Trade and Industry Plan (BCTIP) was formed to achieve this goal. In the Seventh Malaysian Plan, the government devised new strategies to develop the BCTIP. Several programs are being executed to achieve the NDP’s objective, such as establishing small and medium-sized industries (IKS) with competitive and enduring qualities in strategic economic sectors. To achieve this objective, several strict conditions are enforced on those interested in and eligible to join the BCTIP. In parallel with the strategy outlined in the resolution of the Third Bumiputera Economic Congress, 1992, this paper attempts to evaluate and rethink the achievements, opportunities and challenges in the Franchise Development Program (FDP) as an important mechanism that encourages Bumiputeras to take an active role in the development of the nation’s economy. 
The franchise trading system has been identified as one of the shortcuts for maximizing the number of Bumiputera entrepreneurs and businessmen, which in turn would increase the size of the Bumiputera middle class. The Prime Minister, when launching the FDP on 27 January 1994, noted: “Today’s business and trade world is becoming more challenging, more competitive and more advanced. Small self-owned businesses run haphazardly will not be very fruitful. Today we live in a world of ‘giants’. Hence, the Bumiputeras have to enter a very large business and entrepreneurship field and have to be ready to bear reasonable risks. To excel, any business has to be handled wisely, systematically, efficiently and widely, for instance by creating branches or networks. One of the approaches to good business is the franchise system. This system can allow Bumiputera involvement without extremely high risk.” In this context, the government developed the Franchise Development Program as a strategy for developing a community of Bumiputera entrepreneurs able to withstand and excel in the business world. With the involvement of the private sector, under the Malaysian privatization concept, opportunities arise for Bumiputeras to set up franchise businesses that stress uniformity and quality in the goods and services provided. Whatever the situation, only those who are determined, independent, strong, disciplined and wise in management skills can ensure the success of a franchise business.
Developing a Computer Networking Degree:
Bridging the Gap Between Technology and Business Schools
Dr. Karen Coale Tracey, Central Connecticut State University, New Britain, Connecticut
The idea of integrating curriculum and fostering collaboration between academic disciplines is not a new concept in higher education. Interdisciplinary learning, teaching, and curriculum came to the forefront as part of the progressive educational movement of the early twentieth century. Multidisciplinary and interdisciplinary programs can foster, accelerate, and sustain constructive change in academia and student learning (Ellis & Fouts, 2001). The purpose of this paper is to describe the proposal for the Bachelor of Science in Computer Networking Technology degree at Central Connecticut State University (CCSU). CCSU is a regional public university that serves primarily residents of central Connecticut. It is one of four regional public universities offering higher education in the state. CCSU’s location in the center of the state means that the entire population of the state is within 75 miles of its campus in New Britain. Connecticut is one of the smallest states in land area: its 4,845 square miles make it the third smallest state (World Almanac, 2002). The greatest east-west distance in the state is approximately one hundred miles; the greatest north-south distance is approximately seventy-five miles. Connecticut’s population of approximately three million makes it the twenty-first smallest state in terms of population (U.S. Bureau of Census, 2000). Its population growth during the last decade (1991-2000) was 3.6 percent, noticeably less than the 13.1 percent growth of the U.S. as a whole. CCSU is located approximately 2-3 hours from Boston and New York City. CCSU is divided into five academic schools: Arts/Sciences, Business, Professional Studies, Technology, and Graduate. CCSU enrolls approximately 12,000 students, about 2,000 of whom are in the Business School and 900 in the School of Technology. Most CCSU students (about three quarters) are undergraduates (CCSU, 2002). 
Ninety-five percent are Connecticut residents, and twenty-two percent live on campus. Sixty-eight percent of the full-time students receive need-based financial aid (Morano, 2002). There is no agreement on the meaning of multidisciplinary and interdisciplinary programs, but Beggs (1999) provides a guide. He describes a discipline as a body of knowledge or branch of learning characterized by an accepted content and method of learning. Research, problem solving, or training that mingles disciplines but maintains their distinctiveness is multidisciplinary. Practically speaking, faculty from at least two disciplines who work together to create a learning environment and incorporate theory and concepts from their respective academic disciplines can be categorized as interdisciplinary. The creation of an international field course, for example, is a platform for students from different disciplines to interact; the result is a broad picture of each discipline in an international context. Researchers have found many strengths in interdisciplinary curricula (Anderson, 1988). Interdisciplinary curriculum improves higher-level thinking skills, and learning is less fragmented, so students are provided with a more unified sense of process and content. Interdisciplinary curriculum provides real-world applications and team building, heightening the opportunity for transfer of learning and improving mastery of content. Interdisciplinary learning experiences positively shape learners’ overall approach to knowledge through a heightened sense of initiative and autonomy, and improve their perspective by teaching them to adopt multiple points of view on issues. Ellis and Fouts (2001) summarized the benefits of interdisciplinary curriculum: the interdisciplinary curriculum improves higher-level thinking skills; learning is less fragmented, so students are provided with a more unified sense of process and content. 
The interdisciplinary curriculum provides real-world applications, hence heightening the opportunity for transfer of learning. Improved mastery of content results from interdisciplinary learning. Interdisciplinary learning experiences positively shape learners' overall approach to knowledge through a heightened sense of initiative and autonomy, and improve their perspective by teaching them to adopt multiple points of view on issues. Motivation to learn is improved in interdisciplinary settings. The proposal for the BS in Computer Networking Technology responds to the rapid changes in “high technology” fields and the high demand for information technology workers. Campuses and schools are increasingly wired, as students and teachers look to computers and the Internet to supplement other methods of teaching and learning. Technology is also becoming an important part of business and education administration, as networks provide a means to manage these enterprises. Technology is changing the face of education. Today’s students want to learn skills that will make them highly marketable in the Internet economy. As a result, there is increased emphasis on skills development as well as on gaining knowledge and understanding. Industry predicts a shortage of approximately 350,000 information technology workers. The summaries listed below, found on the Internet, also document the critical need for information technologists: Central Connecticut State University and the Department of Computer Electronics and Graphics Technology are meeting the needs of the State of Connecticut by filling many of the positions listed in the “Status of Connecticut Critical Technologies Report” (March 1, 1997). In this report, information technology workers are identified as a necessity to support the aerospace and manufacturing industries.
It is now well established by academic scholars that property rights are a necessary requirement for the functioning of a market-based economy (Alchian & Demsetz, 1973; Drahos, 1996). Over the last two centuries or so this principle has been extended to Intellectual Property Rights (IPR), which include patents, copyrights, trademarks, brands, etc. (Abbott et al., 1999). That importance is shown by the monetary and competitive gains generated by brand equity. However, the definition and protection of intellectual property rights is also one of the most complex subjects of international negotiations, because its acceptance has not always been universal (May, 2000). Criticisms of the extension of IPR include, among other things, its impact on free trade and competition (Maskus, 2000; Maskus & Lahoual, 2000). This paper argues that recent court cases and agreements like TRIPS (Gervais, 1998) may lead to the erosion of the fundamentals of property rights per se and, by implication, of the attributes of the market. Specifically, it addresses the issues of competition and the rights of buyers and consumers. The focus of the paper is trademarks and brands; in particular, it addresses the issue of gray marketing and the implications for global marketing management (Clarke & Owens, 2000) and for innovation in science-based products like pharmaceuticals (Rozek & Rapp, 1992; Grubb, 1999). Finally, the paper argues that it is far better for companies to use marketing tools, rather than courts, to protect their brand and trademark equity.
Copyright 2000-2017. All Rights Reserved