The Journal of American Academy of Business, Cambridge
Vol. 3 * Num. 1 & 2 * September 2003
The Library of Congress, Washington, DC * ISSN: 1540-7780
Online Computer Library Center * OCLC: 805078765
National Library of Australia * NLA: 42709473
Peer-Reviewed Scholarly Journal
Most Trusted. Most Cited. Most Read.
All submissions are subject to a double blind peer review process.
The primary goal of the journal is to provide opportunities for business-related academicians and professionals from various business-related fields, in a global realm, to publish their papers in one source. The Journal of American Academy of Business, Cambridge will bring together academicians and professionals from all business-related fields and related disciplines to interact with members inside and outside their own particular disciplines. The journal will provide opportunities for researchers to publish their papers as well as opportunities to view others' work. The Journal of American Academy of Business, Cambridge is a refereed academic journal which publishes scientific research findings in its field under ISSN 1540-7780, issued by the Library of Congress, Washington, DC. The journal will meet the quality and integrity requirements of applicable accreditation agencies (AACSB, regional) and journal evaluation organizations to ensure that our publications provide our authors with publication venues that are recognized by their institutions for academic advancement and academically qualified status. No manuscript will be accepted without the required format. All manuscripts should be professionally proofread before submission. You can use www.editavenue.com for professional proofreading and editing.
The Journal of American Academy of Business, Cambridge is published twice a year, in March and September. E-mail: firstname.lastname@example.org; Journal: JAABC. Requests for subscriptions, back issues, and changes of address, as well as advertising inquiries, can be made via the e-mail address above. Manuscripts and other materials of an editorial nature should be directed to the Journal's e-mail address above. Address advertising inquiries to the Advertising Manager.
Copyright 2000-2018. All Rights Reserved
The Relationship Between Organizational Climate and Organizational Culture
This multi-method study explored the relationship between organizational climate and organizational culture in a newly emerging university. Organizational climate was explored through the distribution of a survey to 145 academic staff; an 88% response rate yielded 128 responses. To uncover the organizational culture, semi-structured interviews were conducted with the Deputy Vice-Chancellor, the Deputy Principal, 7 Deans, and 15 Centre Heads from the various faculties. The study uncovered the ways in which organizational culture evolves and becomes intertwined with organizational climate. The data yielded new insights into the ways in which organizational climate and culture intersect. This has particular relevance at the sub-unit level, where climate features are most positive in those faculties whose subcultures are congruent with the leadership culture, and least positive in faculty subcultures that are incongruent with the leadership.

Organizational climate and culture have been important constructs in organizational theory for about thirty years (Moran and Volkwein, 1988; Schein, 1992), although relatively few researchers have chosen to study them concurrently, either conceptually (Moran and Volkwein, 1992) or empirically (Turnipseed, 1992). Yet culture and climate research in the last ten years has enriched our understanding of organizational theory. There is a good deal of conceptual blurring in the literature when it comes to key terms such as organizational culture and organizational climate (Falcione and Kaplan, 1984: 285; Jablin, 1980; Schein, 1990). For example, belief systems, which are regarded as central to organizational climate, are ultimately derived from prevailing value systems and must therefore somehow be associated with organizational culture.
Hence, the possibility of reciprocity between culture and climate was a key focus of this multi-method study, which examined the extent to which organizational culture can be inferred from the behavioural features of an organization as manifested in the organizational climate. An examination of the specific relationship between culture and climate begins with actual perceptions of organizational events, as encapsulated in Koys and DeCotiis's (1991) dimensions of climate, but not individuals' interpretations of those events. For this to occur, the research design calls for a qualitative approach, whereby the meaning of various levels of discourse may be analysed using, for example, an extensive interviewing situation with senior academic staff. Such in-depth probing leads to the surfacing of underlying assumptions, cognitions and feelings (Sackmann, 1991) of organizational culture, yet uncovers only a portion of it (Clawson, 2002). Second, and of greater interest, is the potential for exploring the reciprocity of these same two variables. Organizational climate is seen as a descriptive construct, reflecting consensual agreement amongst members regarding key elements of the organization in terms of its systems, practices and leadership style. The definition of organizational climate which best fits the present study is offered by Moran and Volkwein (1992) and is an amalgam of elements from definitions derived from the cited work of Forehand and Gilmer (1964), Pritchard and Karasick (1976) and DeCotiis and Koys (1980).
Organizational climate is a relatively enduring characteristic of an organization which distinguishes it from other organizations, and which: (a) embodies members' collective perceptions about their organization with respect to such dimensions as autonomy, trust, cohesiveness, support, recognition, innovation and fairness; (b) is produced by member interaction; (c) serves as a basis for interpreting the situation; (d) reflects the prevalent norms, values and attitudes of the organization's culture; and (e) acts as a source of influence for shaping behaviour (Moran and Volkwein, 1992: 20). Poole (1985: 84) regards climate as 'an empiricist substitute for the richer term 'culture' [and concludes] climate seems to be a feature of rather than a substitute for culture'. The two concepts may be viewed as being distinct, a function of, or a reaction to one another (Hughes, Ginnett, and Curphy, 2002). While the concept of organizational culture is not new and can be linked to its anthropological origins (Smircich and Calas, 1987), as an organizational variable it appears to have enjoyed a somewhat shorter history (Schein, 1990) than climate. Schein (1992: 10) defines culture as 'the accumulated shared learning of a given group covering behavioural, emotional and cognitive elements of the group members' total psychological functioning'. As such, culture is taken to represent the group members' accumulated learning. Organizational culture is shaped by the leader's values, which selectively direct and influence its development (Sarros, 2002). Overall, the concept of organizational culture appears in many ways to be more diffuse than that of climate. As Smircich and Calas (1987: 245) note, 'the organizational culture literature is full of competing and often incompatible views. Functional, interpretive and critical voices are all speaking at the same time', reflecting differing worldviews and epistemologies.
This study acknowledges the influence of the anthropologist Clifford Geertz (1973: 5), who likens culture to a web, suggesting 'Man is an animal suspended in webs of significance he himself has spun'. The analysis of culture, therefore, can be an interpretive search for meaning within a pattern of symbolic discourse. There are many similarities between organizational climate and culture, although a number of researchers have considered and rejected the proposition that they are synonymous (Moran and Volkwein, 1992; Schneider and Snyder, 1975). Yet, because the two variables share a number of overlapping attributes, the distance between culture and climate is perhaps not as great as first thought. According to Ashforth (1985: 841), 'It is not a large conceptual step from shared assumptions (culture) to shared perceptions (climate)'.
The Perceived Threats to the Security of Computerized Accounting Information Systems
Dr. Ahmad Abd El-Salam Abu-Musa, Tanta University, Egypt and KFUPM, Saudi Arabia
The rapid change in computer technology, the widespread availability of user-friendly systems and the great desire of organizations to acquire and implement up-to-date computerized systems and software have made computers much easier to use and enabled accounting tasks to be accomplished much faster and more accurately than hitherto. On the other hand, this advanced technology has also created significant risks related to ensuring the security and integrity of computerized accounting information systems (CAIS). The technology, in many cases, has developed faster than the advancement in control practices and has not been matched by a similar development of employees' knowledge, skills, awareness, and compliance. In this paper, a general overview of CAIS security threats will be presented, the different classifications of security threats will be outlined and the causes of security violations will be briefly highlighted. Finally, the approaches and techniques of CAIS security abuse will be discussed in some detail.

Parker (1983) argued that, according to Jackson's law, anything hit with a big enough hammer will break! When it comes to computers, their facilities, storage media, computer programs, people, or data, the hammer need not be very large, because they are all fragile and becoming more so. In addition, because of their great processing capabilities, the concentration of data and the speed of operation, there are many possibilities to do harm in, to, or with a computer. These possibilities are also extended over great geographic distances by the increasing use of data communications capabilities that connect many computers and terminals in a network (p. 41). Davis (1996) mentioned that great changes in computer technology are occurring with greater frequency than ever before, and many of these changes are being adopted into organizations' accounting information systems. These technological advancements have created new security threats to CAIS. Davis developed a list of sixteen security threats to be investigated in the real world. Schweitzer (1987) considered the main security threats to electronic information to be: loss of information privacy; theft of information; unauthorized use of information; fraudulent use of information and computers; loss of information integrity as a result of unauthorized intentional change or manipulation of data; and loss of computing services due to unauthorized or intentionally malicious actions. Haugen and Selin (1999) classified the common types of computer-based fraud under the following six categories: Altering input: Altering input does not require extensive computer skills; the perpetrators need only understand how the system operates in order to cover their tracks.
Theft of computer time: Using a computer system for unauthorized purposes, such as running a personal business or keeping little league statistics, constitutes fraud, even though in many cases the individuals are not aware that they are doing anything wrong. Software piracy: It has been estimated that for every legal copy of software there are from one to five illegal copies, costing the software industry between $2 billion and $4 billion a year (Levi, 1993). Altering or stealing data files: Data can be changed, deleted, scrambled or manipulated, often by disgruntled employees, to reduce its value or eliminate derogatory impact. It can also be stolen or replicated and marketed to competitors, or to others that could gain a competitive advantage. Theft or misuse of computer output: Local area networks expose computer-generated output to a larger audience through shared printers, usually maintained in a public location for ease of access. Desktop screens are often easily observable, and output sent through interoffice mail is subject to interception. The more sensitive the information contained in the output, the more care and control is needed. Unauthorized access to systems or networks: With the proliferation of Internet usage, and the flexibility and ease of use found with most networked systems, care needs to be taken to restrict access to and protect sensitive files. Networks are particularly vulnerable to hackers taking advantage of the weak security provided for dial-in and remote access. There are many internal and external forces which could cause security breaches of CAIS. Most of the internal causes relate to internal employees and staff, who can access the organisation's assets and accounting systems. Internal organisational and technical problems, such as poor internal controls, poor personnel policies and practices, and poor examples of honesty at the top levels of an organisation, are essential causes of security threats.
They might include inadequate rewards and compensation plans, inadequate management controls, inadequate reinforcement and performance feedback mechanisms, inadequate support, inadequate operation reviews, lax enforcement of disciplinary rules, the fostering of hostility and other motivational issues. According to Haugen and Selin (1999), there are many reasons that might make employees commit computer crimes and steal from the business for which they work, the more common being revenge, overwhelming personal debt, substance abuse and lack of internal controls. Business today is very competitive, and employees can feel very stressed. As a result, they have feelings of being overworked, underpaid and unappreciated. If employees are also struggling with serious personal problems, their motivation to commit fraud may be very high. Add to the equation poor internal controls and readily available computer technology to assist in the crime, and the opportunity to commit fraud becomes a reality. Assessing the organisation's risk of computer crime is sometimes difficult, but by instituting a proper internal control system, including good employment practices and training programmes, organisations can take a proactive stance in warding off computer crime and keeping losses to a minimum. The following list of violation causes is based on the work of many experts in the field of electronic information security (although it probably remains incomplete):
The High Tech Global Accounting Classroom in the 21st Century
Dr. Dhia D. AlHashim, California State University, Northridge, CA
Dr. Siva Sankaran, California State University, Northridge, CA
Dr. Earl J. Weiss, California State University, Northridge, CA
The globalization of businesses, the increasing complexities of business transactions, and advances in information technology are facilitating electronic commerce with far-reaching economic, social and political implications. They are also bringing new tools to carry out the missions of educational institutions. These high-tech developments place a continual demand on colleges and universities to update their curricula in order to maintain the relevance and usefulness of traditional accounting curricula and the integral standards imposed by both national and international accreditation agencies. The objective of this paper is to investigate the progress being made to incorporate technology into the accounting curriculum. A historical review of the use of technology in the classroom illustrates how far we have come in such a relatively short period of time. Up until the middle of the 20th century, technology for accounting instruction in the classroom consisted of chalk and talk. The decades of the 1950s and 60s saw little change other than the rarely used flip chart on an easel and audiovisual equipment such as movie, filmstrip, and opaque projectors. During the 1970s, videotapes and overhead transparencies began to make their appearance in many classrooms. It was not until the late 1970s that technology, by today's standard, began to surface in the form of microcomputers. Although microcomputers were still a long way from becoming a mainstay in the classroom and having any significant impact on pedagogy, the future of technology was now beginning to appear on the horizon. During the 1980s, microcomputers (the name would soon be shortened to "computers") and software began being developed at an exponential rate with the use of DOS and more sophisticated word processing, database, spreadsheet, and integrated accounting software.
Many universities had established computer labs by the end of the 1980s, which permitted accounting instructors to begin to incorporate computer applications into the curriculum. However, actual in-class technology through the use of computers was still a few years away. By the end of the decade, computer-assisted instruction (CAI) and computerized tutorials and practice sets were becoming a popular way to supplement instruction outside of class. During the 1990s, several phenomena occurred that would forever change technology in and out of the classroom: (1) the fine-tuning of Windows as the universal operating system made the use of computers more intuitive and user-friendly; (2) the latest generation of students was entering college more computer literate (as the 1990s progressed, more and more students owned computers); (3) the development of PowerPoint and other presentation software used with computer data show projectors (LCDs) made instructional delivery more effective and visually appealing; and (4) the creation of the information highway of the Internet and World Wide Web resulted in almost unlimited access to information and a renaissance in distance learning. The 1990s also witnessed the coming and going, in only one decade, of CD-ROM authored courses and tutorials, some of which have been salvaged through conversion to the Web. A current and emerging issue that international accountants must address more assertively is information technology (IT). IT allows corporations to overcome the barriers imposed by time and distance. IT, therefore, increasingly affects the value of professional accountants' services and their ability to remain competitive in the marketplace.
Recognizing this fact, the International Federation of Accountants (IFAC) created a permanent Information Technology Committee, with the objective of increasing "accountants' information technology competency and their awareness of technological developments and applications by facilitating relevant research, enhancing global communications and providing guidance on technology-related issues impacting accountants, their employers and clients." This Committee issued two International Information Technology Guidelines, namely, Managing Security of Information (January 1998) and Managing Information Technology Planning for Business Impact (January 1999). In addition, in December 1999, this Committee issued three "Exposure Drafts," namely, Acquisition of Information Technology, Implementation of Information Technology Solutions and IT Service Delivery and Support. To ensure that accountants are appropriately trained in this important area, the Education Committee of IFAC issued, in November 1995 (revised in June 1998), International Education Guideline (IEG) 11, Information Technology in the Accounting Curriculum, which establishes a framework for organizing IT-oriented education for accountants. Furthermore, the Financial and Management Accounting Committee of IFAC released a Practice Statement in 1996, Strategic Planning for Information Resource Management, to help management accountants effectively use information resources and systems.
The October 1999 issue of IFAC's Education Network presented an abstract of a joint presentation at the 1999 Annual Conference of the European Accounting Association by Professors Bob Hoffman of San Diego State University and Linard Nadig of the University of Fribourg in Switzerland on "Technological Challenges in Developing a Synchronous Distance Learning International Accounting Class." The authors presented a distance learning accounting class bringing together students from Japan, Spain, Switzerland and the United States, through which students and faculty members made face-to-face contact by communicating through multi-point audio and video conferencing. In January 2000, IFAC's Education Committee issued a discussion paper titled "Quality Issues For Internet And Distributed Learning in Accounting Education," discussing the environment that encourages the design, development and delivery of high-quality Internet and distance learning in global accounting education. The term "distributed learning" was considered more relevant than the term "distance learning," since the former refers to both time and distance, whereas the latter refers only to distance, causing readers to think that it does not apply in situations where users are in close geographic proximity. In November 1999, the International Accounting Standards Committee (IASC) published a "Discussion Paper," Business Reporting on the Internet, which examined the current technologies available for electronic business reporting. The expectation in the near future is that the IASC will be able to develop a set of standards for electronic business reporting, since it has been recognized that global investors and lenders have been making great use of Internet corporate reporting.
The Integration of Gender and Political Behavior into Hambrick and Mason's Upper Echelons Model of Organizations
Daun Anderson, Regent University, Virginia Beach, VA
A top management team’s (TMT) characteristics are determinants of the strategic choices that it makes, and these choices determine organizational performance. This paper integrates gender and political behavior, as mediating and moderating variables, respectively, into Hambrick and Mason’s (1984) upper echelons model of organizations. The author offers a new model that includes gender and political behavior, based on a literature review of the impact of these variables on decision making. She posits that women’s leadership styles result in innovation and comprehensiveness in analyzing strategic alternatives. She also posits that the amount of political behavior within a TMT moderates the effect of its members’ characteristics on the strategic choices that they make. Hambrick and Mason’s (1984) upper echelons model of organizations proposes that members at the upper echelons of an organization, who are also known as the top management team (TMT) (the author uses the terms interchangeably throughout this paper), formulate and implement strategic choices. The underlying assumption of the model is that upper echelon characteristics, both psychological and observable/demographic, are determinants of these strategic choices, which, in turn, determine organizational performance. This paper adds gender to the list of demographic mediating variables, and political behavior as a moderating variable, thereby creating a new model that more thoroughly depicts the determinants of organizational performance. The basis for Hambrick and Mason’s (1984) seminal work on the upper echelons of organizations is the Carnegie School view that complex decisions are the result of behavioral factors, and that strategic decisions reflect the decision makers’ values and characteristics (Cyert and March, 1963; March and Simon, 1958). Andrews’ (1971) view that people construct strategy, and Child’s (1972) strategic choice perspective, support the Carnegie School view. 
Cyert and March reinforced the idea of people working together when they introduced the concept of the dominant coalition, and Bourgeois (1980) later used this term to refer to the TMT. Given the significance of the TMT’s role in determining organizational strategy, it is important to understand how its members go about this process. The paper begins with a description of Hambrick and Mason’s (1984) upper echelons model. A literature review of the relationship between five of the TMT demographic attributes in the model, and strategic decision making, follows the description. The author then presents a justification for adding gender to the mediating demographic attributes. A literature review of the impact of political behavior on decision making follows that discussion and explains how political behavior moderates the effect of upper echelon characteristics on the strategic choices that TMTs make. Graphic depictions of three models - the original Hambrick and Mason model, a modified model that includes gender, and a second modified model that includes both gender and political behavior - appear throughout the paper. The author offers a research method that might test the proposed impact of gender and political behavior on strategic choice, and she suggests three areas for future research. Figure 1 illustrates that the determinants of organizational performance are many and varied, beginning with the independent variable of the objective situation, or an organization’s external and internal environments. Hambrick and Mason (1984) propose that upper echelon characteristics are a reflection of the situation in which an organization exists. Those characteristics are both psychological and observable. The former include one’s cognitive base and values, while the latter include age, functional track, other career experiences, education, socioeconomic roots, financial position, and group characteristics. 
These characteristics are key mediating variables as determinants of the TMT's strategic choices, which become determinants of organizational performance. Even authors who posit that the environment constrains an organization acknowledge that organizational leaders assess their environment and proactively implement changes to adapt accordingly (Pfeffer and Salancik, 1978). According to Hambrick and Mason, it is the interaction between the situation, upper echelon characteristics, and the strategic choices that the TMT makes that determines organizational performance, as measured by profitability, growth, and survival. Hambrick and Mason (1984) found an association between age and corporate growth, leading to their proposition that firms with younger upper echelon managers will use riskier strategies than firms with older managers. They believed that, as executives become older, they become more conservative and committed to the status quo. Other researchers agree that TMTs with young members will engage in more technological and administrative innovations (O'Reilly and Flatt, 1989, as cited in Hambrick, 1994; Bantel and Jackson, 1989). Young TMT members are also more likely to implement changes in corporate strategy (Wiersema and Bantel, 1992), to promote international diversification (Tihanyi, Ellstrand, Daily, and Dalton, 2000), and to favor riskier acquisition candidates (Hitt and Tyler, 1991). The study of the relationship between functional track and strategic decision making began several decades ago, when Dearborn and Simon (1958) found that managers' functional backgrounds affect the way that they interpret critical problems.
According to Hambrick and Mason (1984), TMT members who specialize in marketing, sales, and product research and development are likely to pursue strategies that emphasize a search for new opportunities, supporting Gupta and Govindarajan’s (1984) view that there is a positive association between TMT experience in marketing and sales, and a strategy of growth. TMT members whose backgrounds are in production, process engineering, and accounting focus on automation, plant and equipment, and backward integration.
Are Leadership Styles Linked to Turnover Intention? An Examination in Mainland China
Dr. Jovan Hsu, Tongji University, Shanghai, China
Dr. Jui-Chen Hsu, Chia Nan University of Pharmacy and Science, Tainan, Taiwan
Dr. Shaio Yan Huang, Providence University, Taichung, Taiwan
Dr. Leslie Leong, Central Connecticut State University, New Britain, CT
Dr. Alan M. Li, City University of Hong Kong, Kowloon, Hong Kong
This research investigates how leadership styles affect turnover intention, based on 127 valid employee surveys from three major Internet companies in Mainland China – the People's Republic of China (PRC). This study is one of the few, if any, to examine the relationship between leadership styles and turnover in a Mainland Chinese context. Three different leadership styles – instrumental leadership, supportive leadership, and participative leadership – were adopted from House and Dessler's (1974) Path-Goal leadership model. It was found that there was a significant negative relationship between leadership styles and turnover intention, as well as a significant negative relationship between each component (instrumental leadership, supportive leadership, and participative leadership) and turnover intention. It was also found that there were no significant differences between technical and non-technical employees, or between managerial and non-managerial employees, in the relationship between leadership styles and turnover intention. This study seeks to investigate the relationship between leadership styles and turnover intention in China's dot.com industry. Leadership has been one of the most popularly studied constructs in the management field. Yet most leadership studies have been conducted in Western settings, and few could be found using a Mainland Chinese sample. This study adopts the House et al. (1974) Path-Goal leadership model to explore the voluntary turnover situation in the Chinese Internet sector at a critical time when a bubble-burst economy has caused a personnel shake-up. The purpose of this study was to examine whether there was a relationship between the instrumental, supportive, and participative components of leadership and turnover in China's dot.com industry.
In this study, the Path-Goal leadership theory (House et al., 1974) was tested among employees, concerning 1) the relationship between leadership styles and turnover intention, 2) the relationship between each of the three dimensions of leadership styles, namely instrumental leadership (IL), supportive leadership (SL), and participative leadership (PL), and turnover intention, 3) differences between technical and non-technical employees in the relationship between leadership styles and turnover intention, and 4) differences between managerial and non-managerial employees in the relationship between leadership styles and turnover intention. The results revealed a significant negative relationship between leadership styles and turnover intention, as well as a significant negative relationship between each component (instrumental leadership, supportive leadership, and participative leadership) and turnover intention. The results also revealed no significant differences between technical and non-technical employees, or between managerial and non-managerial employees, in the relationship between leadership styles and turnover intention. The core theory of this study is the Path-Goal Theory of Leadership, which is one of the contingency theories of leadership. Typically, leadership theories are broken into three categories: trait theory, behavioral theory, and contingency theory. Firstly, the earliest leadership research, dating from roughly the 1900s to the 1940s, took the form of trait theory. In trait theory, leadership is defined in terms of personality and its effects on the group, and emphasis is placed on the importance of the leader as an individual to whom the group is largely subservient (Bingham, 1927; Stogdill, 1975).
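The directional relationships described above can be illustrated with a minimal correlational sketch. The data, variable names, and scale values below are entirely hypothetical and are not drawn from the paper's actual survey or analysis:

```python
# Minimal sketch of a correlational check on one leadership dimension.
# All scores below are hypothetical Likert-scale averages, not the study's data.

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-respondent averages (1 = low, 5 = high).
instrumental_leadership = [4.2, 3.8, 2.1, 4.5, 3.0, 2.6]
turnover_intention      = [1.8, 2.2, 4.3, 1.5, 3.1, 3.9]

r = pearson(instrumental_leadership, turnover_intention)
print(f"r = {r:.2f}")  # a negative r is consistent with the reported direction
```

A full replication would also require significance tests and the subgroup comparisons (technical vs. non-technical, managerial vs. non-managerial) the study describes; this sketch only shows the direction of a bivariate association.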
Leadership was also described as growing out of group processes and problems and as an instrumentality of group goal attainment (Cooley, 1902; Pigors, 1935; Stogdill, 1975). These theorists proposed leadership traits such as dependability, dominance, energy, intelligence, self-confidence, social activity, and talent (Bass, 1990). However, trait theory did not stand long, owing to its dubious practical value and inconsistent results (Kiechel, 1986). Secondly, behavioral theory focused on how leaders acted, whereas trait theory had focused on inherent and unobservable traits (Yukl, 1989). Two schools represent the mainstream of behavioral theory. In the late 1940s, Ohio State University proposed two dimensions of leadership – the initiating-structure dimension, in which the leader defines leader-subordinate roles for subordinates, and the consideration dimension, in which the leader is concerned with and respects subordinates’ feelings (Bowers & Seashore, 1966). The University of Michigan likewise proposed two dimensions – the employee-oriented dimension, in which the leader is concerned with the well-being of subordinates, and the production-oriented dimension, in which the leader focuses on performance (Kahn & Katz, 1960). However, behavioral theory overlooked the complexities of individual behavior in organizational settings. Thirdly, contingency theory represented leadership in a versatile fashion, conceptualizing leadership as accommodating the complexities introduced by varying situations (Hodgetts, 1991). Contingency theory has five branches: Fiedler’s Contingency model, the Normative Decision model, Hersey and Blanchard’s Situational Leadership Theory, Leader-Member Exchange (LMX) Theory, and the Path-Goal Theory.
Dr. Ahmad Abd El-Salam Abu-Musa, Tanta University, Egypt and KFUPM, Saudi Arabia
Examining Consumer Behavior in Food Industry: An Anthropological Descriptive Follow-Up Case Study
Consumer behavior is the study of human responses to products, services, and the marketing of those products and services. Different consumers may act ethically or unethically in a given situation. This is a follow-up study on unethical behavior and its effects in the Erskine College cafeteria. It concludes that although students’ attitudes toward unethical behavior are negative, they still participate in it. Punishment is a reasonable solution to the problem: it will make students think before they act in an improper manner. Consumer behavior is the study of human responses to products, services and the marketing of the products and services (Kardes 2002). In consumer science, ethics is the study of standards of conduct and moral judgment; individual consumers make ethical or unethical decisions based on their moral values, although situational influences can lead individuals to invent rationalizations for unethical behavior. Moral worth of a behavior is determined by its consequences and by the results of a particular action. Individual consumers have the choice to make an ethical or unethical decision in any given situation. Consumers’ unethical behavior can also be termed consumer misbehavior, defined as behavior in exchange settings that violates the generally accepted norms of conduct in such situations. Misbehavior by consumers disrupts the openness, impersonal trust, and orderliness of the ideal exchange environment. It challenges some of the very foundations of contemporary consumer society: its inherent norms and role expectations, the legitimacy of marketers to establish boundaries, and the overall capacity of the system to function smoothly (Punj & Fullerton 1997). 
A major stream of research in consumer ethics comprises studies that examine consumer attitudes toward a variety of potentially unethical situations. Assessing and manipulating the consequences of desired behavior (the things that happen after the behavior occurs), such as precise praise or feedback, while keeping in mind the principles of shaping and reinforcing incompatible behaviors, is necessary for business managers dealing with unethical behavior (Hoffman 2001; Vitell et al., 2001). It is suggested that customer value, whether extrinsic or intrinsic, be examined: intrinsic value relates to the essential nature of a thing and is thus inherent; extrinsic value does not form an essential or inherent part of a thing and is thus extraneous. Ethical action involves doing something for the sake of others with concern for how it will affect them or how they will react. The motivation for such action is intrinsic because virtue is its own reward (Smith 1996). Consumers weigh many concerns when making decisions about their consumption behavior. Consumers have ethical concerns about decision-making, purchases, and other aspects of consumption in the marketplace; these concerns may be humane, religious, personal, or environmental (Martin 1993; Holbrook 1993). They may create a dilemma between what is good for one’s self and what is good for other people: what is good for one’s self may be bad for another, and what is bad for one’s self may be good for other people. To make ethical decisions, consumers apply both deontological norms and teleological norms (Marks and Mayo 1991; Schiffman and Kanuk 2000). Deontological norms are personal values about what is “right” and what is “wrong.” Teleological norms concern what consequences are likely to occur and how good or bad those consequences will be for others. Consumers are likely to combine a deontological and a teleological evaluation to derive a final judgment about an ethical decision. 
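The combining of a deontological and a teleological evaluation can be pictured as a weighted judgment. The weighted-average form below and all numbers in it are our illustrative assumptions, not a model taken from the paper or from Marks and Mayo.

```python
# Hypothetical sketch of combining a deontological evaluation (is the act
# right in itself?) with a teleological evaluation (how good are its likely
# consequences?) into one ethical judgment. The weighted-average form and the
# example scores are invented for illustration only.

def ethical_judgment(deontological, teleological, weight=0.6):
    """Both inputs scored from -1 (clearly wrong / harmful) to +1 (clearly
    right / beneficial); `weight` is the share given to the deontological norm."""
    return weight * deontological + (1 - weight) * teleological

# Sneaking extra food out of a cafeteria: violates a norm (deontological -0.8)
# but brings a small personal gain (teleological +0.3).
judgment = ethical_judgment(-0.8, 0.3)
print(f"combined judgment: {judgment:+.2f}")  # negative: act judged unethical overall
```

The sketch also shows how a preferred consequence can pull the final judgment upward even when the deontological evaluation is negative, which is the tension the next paragraph describes.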
Marks and Mayo (1991) also suggest that people may intend to choose a less ethical alternative when it leads to a preferred consequence; the less ethical alternative may bring some personal gain. Vitell noted that actual behavior might not be consistent with the most ethical choice because of situational conditions that consumers may perceive as “enabling” them to engage in unethical behaviors (Vitell et al., 2001). When people choose an unethical behavior they may feel guilty, and this feeling of guilt, together with the unethical decision that was made, influences the consumer’s future behavior (Marks and Mayo 1991). To eliminate the problem of consumer unethical behavior at business sites, managers need to adopt reactive approaches, for example, social skills programs. Hoffman (2001) studied misbehavior by college students and suggested that it is necessary to develop and implement behavioral intervention plans to correct student misbehavior. He points out that school administrations need to understand why a student misbehaved; knowing what compels a student to engage in a particular behavior is integral to the development of effective, individualized positive behavioral intervention plans and supports. This study presents, from an ethical point of view, a descriptive examination of unethical behaviors of college students at a campus food service site. It is a follow-up to a previous study by Deason & Tian (2003; also see Tian et al. 2002), and is designed to examine how unethical behavior affects students, management, and peers. The study concerns the unethical behavior existing in the Erskine College campus cafeteria, and this paper proposes ideas to make Erskine’s cafeteria a more enjoyable experience.
A Market-Oriented Approach to Inculcating the Value of Academic Excellence Among Minority Students Enrolled in Public Schools
Dr. Lee R. Duffus, Florida Gulf Coast University, Fort Myers, FL
Dr. Joe Cudjoe, Florida Gulf Coast University, Fort Myers, FL
This article examines the factors that influence academic under-performance among minority students and presents a marketing-oriented approach to inculcating the value of academic excellence among them. The study results show that a goal- and peer-based learning system that utilizes marketing principles, emphasizes family involvement and individual performance goals, and provides tangible rewards as well as peer and public recognition for goal achievement will produce high levels of academic performance. Furthermore, this peer network provides students with positive performance benchmarks and supports the development of a peer culture of success characterized by strong self-efficacy among its members. The marketing process provides a systematic method to connect consumers with ways to meet their needs effectively and efficiently. As such, it is an appropriate process for connecting under-performing students or student groups with ways to make substantial progress toward academic success in their public school education. This research describes a market-oriented effort to create a peer culture of success among minority students, and reports on a) self-assessment and performance data collected from the students after 10 years of program operation, and b) comparative performance with a baseline group of mostly white students. Nationally, there were almost 47 million PK-12 public school students in 2001 (NCES, 2002). Minorities represent 22% of the population and 30% of public school enrollment, and their academic performance continues to lag behind that of their non-minority counterparts (Digest of Education Statistics 2000). These data underscore the disparity that persists throughout public education. For example, when tested at three levels of reading competency, gaps of 20 to 30 percentage points were found between nine-year-old white and black or Hispanic children. Such gaps continue as children are tested at ages 13 and 17. 
Students who fall behind early in their basic-skills development have trouble both catching up and keeping up with the new learning of their peers. The reasons often cited for this underperformance include the effects of economic deprivation (Esposito 1999, Herbert and Reis 1999, Kao and Tienda 1998, Pungello et al. 1996, Bowman 1994), the likelihood of growing up in a single-parent family and the related issue of the mother’s low education level (Gordon 2000), lack of family experience with educational success and lack of available role models (Shure 2001, Hudley 1992), and lack of parental and community support for academic attainment (Smith and Hausafus 1998, Fisher and Griggs 1995). These students often respond to their lack of achievement with boredom, frustration, and a sense of alienation from school that shows itself in violence, gang membership, absenteeism, tardiness, and a general lack of preparedness (Brophy & Good 1986, Slavin, Karweit & Madden 1989, http://www.ets.org/research/pic/facingfacts.pdf). Instructional strategies frequently suggested include direct instruction, peer and cooperative learning strategies, and frequent feedback and reward. The other set of influences on academic achievement frequently cited comprises family and peer encouragement and opportunity orientation. Family influences can be harnessed by providing information on how the education system operates, access to educational support resources, elimination of barriers to success and learning, and exposure to positive opportunities (Duffus & Isaacs 1989, Brown & Mann 1990). A complex relationship exists among adolescents between social development, peer influences, and aspiration levels. Students often respond to their alienation by creating their own alternate culture or taking advantage of those already there, as in the case of gangs, to which minority students are especially vulnerable (http://www.ets.org/research/pic/facingfacts.pdf). 
Social interventions designed to create positive environments in which to find positive peer influences can thus affect aspiration levels (Weiner 1990, Goodenow & Grady 1993). Visible role models among adults and peers are important to the definition of aspiration levels and self-concept, since they provide students with opportunities to connect and interact. Peer role models with leadership skills need to be cultivated, with a focus on their positive attributes (Stokes et al. 1988). Much of this activity is based on the expectancy and efficacy theories established by Bandura (1997), who proposes that individuals must be able to reproduce behavior they have observed and must have confidence that they can be successful when they act. Public education involves multiple categories of consumers. Assessment of the diverse needs of these consumers enables the system to a) identify common and differential concerns, b) better target key consumer groups, c) develop appropriate promotional appeals and strategy alternatives, and d) develop solution strategies that reflect community consensus. Careful identification of consumer needs is the logical first step in the marketing process. Table 1 identifies the multiple audiences and their different and overlapping needs. A supportive learning environment in which students achieve academically is an intersecting need among all consumer groups; significant among the groups is the elimination of barriers to learning and success.
Auditing Expectations Gap: A Possible Solution
Auditing expectation gaps have been identified in several countries. This research investigates the expectation gap in Oman. As in other countries, an auditing expectation gap is found in Oman as well. The study looks at education as a way of reducing this gap and proposes that discussion of auditing in introductory accounting texts would reduce it. The expectation gap can be defined as “the difference between what the public and financial statement users believe that auditors are responsible for and what the auditors themselves believe their responsibilities are” (AICPA, 1993). In the United States this expectation gap was identified in the Cohen Commission’s report, which showed that the public expected more of auditors than it believed it was receiving (AICPA Cohen Commission Report 1978). Researchers in several countries have demonstrated that an auditing expectation gap exists and that it is not limited to the USA. In this study the Sultanate of Oman is investigated to see whether an audit expectation gap is found and whether education, in the form of an auditing course, can reduce such a gap. The remainder of the paper is organized as follows: a literature review, followed by the survey instrument used to obtain evidence; the subsequent section presents the analysis of the responses, and the final section presents a solution, the limitations, and directions for future research. In the United States the auditing expectation gap was formally recognized and described in the 1978 Cohen Commission Report. The Auditing Standards Board of the AICPA issued auditing standards in 1988 to reduce the expectation gap. Statement on Auditing Standards No. 58, “Reports on Audited Financial Statements,” requires a new standard audit report that includes an explicit statement that an audit provides reasonable assurance for reliance on the fairness of financial statements. 
SAS No. 53 requires auditors to design an audit to provide reasonable assurance that all material misstatements will be detected. These SASs convey the concept of reasonable, not absolute, assurance provided to users. Epstein and Geiger (1994) conducted a survey of investors regarding auditors’ responsibility to detect material misstatements resulting from error (unintentional misstatement) and from fraud (intentional misstatement). They found that for errors 51% of investors required reasonable assurance and 47% required absolute assurance; for fraud, 70% of the investors required absolute assurance. These levels of demand for absolute assurance were much higher than the AICPA had anticipated after the issuance of the new SASs. Epstein and Geiger suggest that educating the public could reduce this expectation gap. Lowe (1994) studied the audit expectations of judges and auditors and found a large divergence in their perceptions. Lowe and Pany (1993) compared the views of potential jurors with those of auditors on knowledge about auditing, the auditor’s role, and general attitudes toward the profession. They found that the expectations of potential jurors and auditors differed substantially, exhibiting an expectation gap. Gramling, Schatzberg and Wallace (1996) studied the perceptions of students (as a proxy for informed users) and auditors and found significant differences. The expectation gap phenomenon has also been observed in other countries: there are several studies on the expectation gap by researchers from Europe, Australia, New Zealand and Canada. Monroe and Woodliff (1996) studied the perceptions of auditors, accountants, directors, creditors, shareholders and undergraduate students, and found a gap between the auditors and the various user groups. A consistent theme running through many of these research papers is that education could reduce the expectation gap. 
“Could an undergraduate auditing course have an effect on the perceptions of auditing students?” was a question asked in the Gramling, Schatzberg and Wallace (1996) study. The present study is a partial replication and extension of the Gramling et al. study, and the survey instrument used was the same as theirs. That instrument was a modified version of the one used in the Humphrey et al. (1993) study of the auditing expectation gap in Britain. The survey instrument used in this study consists of four primary sections: 1) questions to elicit opinions about auditors and the auditing process; 2) the auditors’ role with respect to audited financial statements; 3) the auditors’ role with respect to the audited company; and 4) the auditors’ responsibility to owners and creditors. Although other information was also collected, it is not part of this study. Respondents were asked to indicate their agreement or disagreement with the statements in the questionnaire using a 7-point Likert scale (1 = Strongly disagree; 4 = Neutral; 7 = Strongly agree).
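As a hypothetical illustration of how 7-point Likert responses of this kind might be compared across groups (for example, students before versus after an auditing course), the sketch below computes group means and a Welch t statistic. The survey item and all responses are invented; this is not the study's data or analysis.

```python
# Illustrative analysis of 7-point Likert responses. The two samples below
# are invented, as is the before/after-course comparison.
from math import sqrt

def mean_sd(xs):
    """Sample mean and sample standard deviation (n - 1 denominator)."""
    n = len(xs)
    m = sum(xs) / n
    var = sum((x - m) ** 2 for x in xs) / (n - 1)
    return m, sqrt(var)

def welch_t(a, b):
    """Welch's t statistic for two independent samples with unequal variances."""
    ma, sa = mean_sd(a)
    mb, sb = mean_sd(b)
    return (ma - mb) / sqrt(sa**2 / len(a) + sb**2 / len(b))

# Invented agreement scores for a statement like "auditors guarantee that the
# statements are free of all fraud" (1 = strongly disagree ... 7 = strongly agree).
before_course = [6, 5, 7, 6, 5, 6, 7, 5]
after_course  = [3, 4, 2, 3, 4, 3, 2, 4]

t = welch_t(before_course, after_course)
print(f"Welch t = {t:.2f}")  # a large positive t: agreement drops after the course
```

A drop of this kind between the two invented groups is the pattern an education-based narrowing of the expectation gap would produce; the actual direction and size in Oman is what the study's analysis reports.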
Ethical Attitudes Among Accounting Majors: An Empirical Study
Dr. Siva Sankaran, California State University, Northridge, CA
Dr. Tung Bui, University of Hawaii, Manoa, HI
Due to innumerable instances of ethical lapses reported in the media recently, the accounting profession has come under close scrutiny. This study investigates whether there are linkages between background characteristics and ethics among individuals who are on the verge of joining the accounting profession. An instrument is developed to measure ethical attitudes and administered to a sample population of college students majoring in accounting. Results show that i) ethics is inversely related to individual competitiveness, ii) personality types have no bearing on ethics, iii) ethics diminishes with age, and iv) women have higher ethics. The study also compares the ethics level of accounting majors with those in other business and non-business majors. Time has repeatedly proven that to err is human: circumstances can sweep away even the best person’s ethical principles. With ever-widening globalization and cut-throat competition, pressure in the workplace is higher than ever before. In the drive to produce results, the modern worker can easily be tempted to compromise ethical principles, and some fall victim to temptation more easily than others. It is ironic that most ethical violations are committed by advantaged and successful professionals. Among business professionals, accountants have come under greater scrutiny of their ethical values and practices due to recent disclosures in the media. In some instances, the ethical lapses of even a handful of employees have brought their firms to extinction. This is why it has become important for companies to take a proactive stance in building an ethical corporate culture, which can be done only if the company has a team of employees committed to moral principles and provides an environment that nurtures them. Often, companies place undue emphasis on employee/consultant skill sets and relegate other personal characteristics, such as ethics, to the background. 
Assuming a company truly wants to evaluate the ethical attitudes of a prospective or current employee/consultant just as carefully as the skill set, what signals should it look for? What specific individual background and contextual factors are related to a person’s ethical compass? In what manner, and to what degree, do these factors affect the level of ethics? Which employees are most likely to commit ethical lapses? These are the questions this research aims to answer. Research in the field of ethics as related to the business professions needs to be expanded, especially in the face of the recent accounting scandals (Lindsay, 2002; Wood, 2002). Currently no unified model or framework exists that captures all the factors influencing ethics in business environments. In our investigation, the factors that showed promise are individual competitiveness, personality type, age, gender, and nature of profession. Individual competitiveness is the personal desire to outperform a rival employee or company (Maramark and Maline, 1993). The modern work environment places considerable demands on employees. With the globalization of the marketplace and the free flow of goods among international communities, competition among corporations for market share has become fierce. While healthy competitiveness is an asset for an individual in a company, an undue desire to outperform can lead to false claims or deliberate concealment of facts, which can result in ethical compromises. For instance, when making a sale an employee may extol the virtues of a product and intentionally hide its shortcomings. It has become commonplace to advertise products and services at attractive prices with restrictive clauses in fine print, in the hope that customers will not notice them. Yet ethical lapses can cause large financial losses through the resulting lawsuits and bankruptcies. 
A real example is the tobacco industry, which, in spite of evidence to the contrary, maintained that nicotine is not addictive. The recent case at Enron is another: losses were purposely hidden by high-level certified accountants, and upon revelation the documents were systematically shredded. It illustrates that in the conflict between personal enrichment and the duty to protect shareholders, greed wins handily. Ethical attributes in a person may be affected by personality type (Barger et al., 1998). There are several examples in various industries that support a possible linkage between personality types and ethical conduct (Tieger and Barron, 1993). Two personality types are discussed in the literature: Type A and Type B (Rowe, 1992; van Aken et al., 1998). Type A behavior consists of several characteristics: always being in a hurry, being easily moved to hostility and anger, and high levels of ambition (Friedman and Rosenman, 1974). Type A individuals are aggressive, task-oriented and time-driven; Type B personalities are more low-key, cooperative and patient. Because of their predisposition to win at any cost, we expect that Type A individuals will tend to compromise on ethics more readily. A study by Coombe and Newman (1997) reported that younger individuals tend to be less concerned with ethical considerations: in responding to social interactions, they tend to follow their own code of ethics and formulate their own moral and ethical stances. As individuals grow older, they become more philosophical and moralistic (Auerbach and Welsh, 1994; Barger et al., 1998). Thus, an older individual is more likely to have a higher level of ethics. Men appear to have lower ethical standards than women (Kelly, 1990). In a study by Petty and Hill (1994), the researchers administered the Occupational Work Ethic Inventory to 2,279 workers; women scored significantly higher on ethics than men. 
Thus, past research seems to indicate that women will form a more ethical workforce. There are good reasons to postulate that ethical values will differ across professions, given the professions’ intrinsic nature and the types of activities they entail, because individuals can generally be expected to match their values to the profession they aspire to work in. Someone who wishes to major in nursing or social work will likely have a high level of altruism, owing to personal attributes and an education that encourages high ethical conduct. On the other hand, someone majoring in a business field is trained to make decisions that optimize economic rewards; the emphasis is on the bottom line, and such individuals may be willing to manipulate the market for the sake of higher profits, letting ethical considerations take a back seat.
Recent Pattern of the U.S. Gender Occupational Segregation and Earnings Gap
Using the Current Population Survey (CPS) and the Census Public Use Microdata Sample (PUMS), this paper makes a descriptive inquiry into changes in gender occupational segregation and the earnings gap in the U.S. labor market during the 1990s. It finds that throughout the decade, including a brief recession in the early 1990s, there was upward mobility in the occupational distribution. More specifically, the occupational distribution was fairly stable, with a slight but consistent increase in the relatively prestigious occupational categories and a modest but sustained decrease in the relatively less prestigious ones. This finding suggests that the more symmetric occupational distribution between male and female workers, along with the upward mobility of female workers, will continue to drive gains in female workers’ earnings - possibly resulting in a narrower gender earnings gap in the future. In the 1990s, the U.S. economy witnessed the longest economic expansion in its history. As jobs are created and destroyed simultaneously at significant rates in booms and recessions alike, the U.S. economy experienced several major changes that would influence the occupational structure and earnings in the 1990s. Among the major changes are the decline of the defense industry following the end of the Cold War, an extraordinary amount of corporate restructuring, the emergence of a truly global economy, deindustrialization,1 and the change in demographic composition resulting from the continuing influx of immigrants. One of the most important developments in the U.S. labor market in the last decade was the increase in the number of women, especially married women, at work for pay. With the majority of women now participating in the labor force, a great deal of attention has focused on women’s earnings and employment. 
This attention is reflected in the numerous policies initiated since the 1960s to raise female earnings and employment opportunities. The rationale for these policies is to counteract the effects of discrimination and, according to some perspectives, to reduce inequalities in labor market outcomes even when they do not result from discrimination. The efficacy of these policy initiatives has come under considerable scrutiny, in part because of the persistence of the overall gender earnings gap, which has spawned considerable debate on the extent to which the gap reflects discrimination and the extent to which it has been affected by various policies (Gunderson 1989). Occupational segregation in the U.S. labor market had been fairly stable until the 1960s (Terrell, 1992). Declines in occupational segregation began during the 1970s and continued throughout the 1980s, albeit at a slower rate (Cotter et al., 1995; Wells, 1999). It is well recognized that occupational segregation explains a major portion of the gender earnings gap. Beyond this, there are several reasons to be concerned with occupational segregation. From a macroeconomic perspective, Anker (1998) viewed occupational segregation as a major source of labor market rigidity and economic inefficiency. When equally qualified workers are excluded from the majority of occupations, society fails to utilize valuable human resources efficiently, resulting in labor market inflexibility. Prolonged labor market inflexibility will ultimately reduce an economy’s ability to adjust to structural changes. Given that the current world economy is closely intertwined through the globalization of production and increasing competition among countries, the flexibility of the labor market assumes greater importance than ever. 
On the microeconomic side, occupational segregation by sex can be clearly detrimental to female workers. To the extent that female workers are discriminated against and forced into lower-level occupations, prolonged and persistent occupational segregation will continue to discourage female workers from investing in human capital, such as education and training, to which they would otherwise have greater access in the absence of segregation. This in turn harms female workers’ labor market positions, including income and poverty status, potentially perpetuating labor market inequalities into future generations.2 Although a great deal of academic effort (human capital theory, taste-based discrimination, statistical discrimination and the crowding hypothesis) has been devoted to identifying the sources of occupational segregation, basic gaps remain in our understanding of how differences in occupational structure have evolved. Specifically absent has been investigation of how the earnings gap and the occupational structure are related (Gullason, 2000). While the issue of occupational segregation is important in its own right, it assumes more importance because of its connection with income inequality. Income inequality rose extraordinarily in both the 1980s and the 1990s in the U.S. economy.3 Income inequality in the 1990s became even more marked in that earnings growth was greater for those higher up the income ladder (Frank, 2000). In the United States, specifically, between 1979 and 1997 the average income of the richest fifth of the population jumped from nine times the income of the poorest fifth to around 15 times (The Economist, June 16, 2001, p. 9). This parallels the British experience, in which income inequality reached its widest level in 40 years in 1999. 
During economic expansions, the scarcity of labor enables workers to move from low-wage industries to high-wage industries. The movement of workers across sectors in search of employment with better conditions not only influences workers’ earnings but also affects occupational structure. Likewise, in economic booms, workers who were unemployed or out of the labor force stand a better chance of finding suitable employment. If the new entrants were more likely to be female workers finding employment in lower-level, lower-wage occupations, the occupational segregation and the earnings ratio between male and female workers would reflect the change. With this background, this paper addresses the following questions. First, what happened to the occupational structure and the gender earnings gap in the U.S. labor market in the 1990s? Second, how do we interpret the findings from the data analysis, and what are their implications?
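Occupational segregation of the kind tracked here is commonly summarized with the Duncan index of dissimilarity, D = ½ Σᵢ |mᵢ/M − fᵢ/F|: the share of one sex that would have to change occupations for the male and female distributions to match. The paper does not name its measure, so using this index is our assumption, and the occupation counts below are invented.

```python
# Duncan index of dissimilarity over occupational categories.
# D = 0 means identical male/female distributions; D = 1 means complete
# segregation. The counts below are hypothetical, for illustration only.

def dissimilarity_index(male_counts, female_counts):
    total_m = sum(male_counts)
    total_f = sum(female_counts)
    return 0.5 * sum(abs(m / total_m - f / total_f)
                     for m, f in zip(male_counts, female_counts))

# Invented counts across four broad occupational categories.
males   = [400, 300, 200, 100]
females = [150, 250, 350, 250]

D = dissimilarity_index(males, females)
print(f"D = {D:.3f}")
```

Computed for two census years, a falling D would correspond to the more symmetric male/female occupational distribution the paper reports for the 1990s.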
Herb de Vries, Christchurch College of Education, Christchurch, New Zealand
Jennifer Margaret, Christchurch College of Education, Christchurch, New Zealand
New Zealand’s small to medium-size businesses (SMEs) encompass 99% of all NZ businesses and employ over 60% of the country’s workers. Businesses with five or fewer employees alone account for a quarter of the country’s workers and constitute just under 85% of all businesses (Ministry of Economic Development, 2001). The significance that SMEs hold for our country’s economy is obvious. The implication of these facts for those of us who provide business-related courses is that we need to deliver content in our courses that supports not only the need of our larger businesses for functional and strategic specialists but also the need of smaller businesses for general strategic management capability. This article is the product of a wider study on the strategic management capability of SMEs and the part that business courses can play in supporting efforts to enhance that capability. An essential part of this study has been the formulation of a model against which the strategic management capability of an organisation can be assessed. This work also has required the development of an instrument that would allow us to gather data that could be plotted on that model. The article describes the development of both the model and the data collection instrument and also reports on a study designed to pilot the instrument. The pilot was conducted with a specific NZ industry that is dominated by small- to medium-sized businesses, namely the NZ Furniture Industry. We begin this article by defining small businesses. From there we briefly examine the notion of strategic management capability and its particular relevance to the development of the model and the questionnaire, both of which we describe in detail. Then, before documenting the specifics of our pilot study, we provide some background information on the NZ Furniture Industry. The pilot study shows not only the instrument in ‘action’ but also reveals how data collected through its use can be plotted on the model. 
Finally, we briefly illustrate how information derived by way of the model can be used as a point of comparison with other findings regarding the strategic management capability of organisations and outline the implications of such findings for the delivery of business-related courses. Cameron and Massey (1999), writing from a NZ perspective, define a small business as one that has 6-49 employees, and a medium-sized business as one that has 50-99 employees. As reference points, they define a micro-business as having five or fewer employees, and a large business as having 100 or more employees. These numbers reflect NZ’s population size; the most common definition of a SME in OECD countries is a firm with fewer than 500 employees (OECD, 1997). In international terms, NZ’s definition of SMEs as enterprises with 6-99 employees is at the lower end of the range. Care is therefore needed when comparing SMEs across countries. Perry and Pendleton (1990, p2) contend that SME owner/managers (o/ms) must not only know their trade but also exhibit business management skills. However, as Hamilton and English (1997, p7) point out, SME founders tend to be entrepreneurs rather than managers by nature. As such, they are usually the best people to start a business but often the worst at managing it. Gerber (1995) goes one step further when he suggests that SME o/ms often engage in work within their own business that does not suit them, and that they operate their businesses according to what they ‘want’ as opposed to what their businesses ‘need’ (p34). This criticism has particular relevance when it is realised that our continually changing trading environment requires SMEs to respond to market needs rather than o/ms’ ‘whims’. 
Gerber’s work, along with that of others (Ashton, 1992; Hamilton & English, 1997; Sibbald et al., 1994), indicates that SME o/ms do not readily appreciate this requirement and so have difficulty formulating and implementing strategies that allow their businesses to respond successfully to market demands. Sibbald et al. (1994, p89) claim that SMEs have the advantage over larger companies of being able to adjust quickly to changing market conditions. However, they also say that SMEs are unlikely to realise this advantage unless they have in place quality management systems (that is, strategic management capability). The importance that effective management holds for a SME in terms of its strategic capabilities cannot therefore be over-emphasised. Sibbald et al. (1994) also suggest that because of the nature of SMEs, many o/ms of these organisations lack the skills to bring a cohesive management approach to strategic planning and implementation. It was this criticism that led to the development of the first part of the strategic management capability model (Figure 1). The model, in identifying how SME o/ms can bring cohesiveness to their strategic management activities, also identifies the elements that make up strategic management capability. The model draws on Sibbald et al.’s (1994) general planning model, Longman’s management actions model (cited in Johnson & Scholes, 1993), and Turner and Mill’s (1994) strategic planning process. It demonstrates that, to be effective, o/ms must first assess the needs of their market, the context in which that market operates, and the ability of their businesses to meet those market needs. Having determined what is needed, they must then facilitate, through strong leadership, the formulation and implementation of strategies that will allow their businesses to meet any shortfall in needs. 
They must also, during both the formulation and implementation phases, effectively manage and monitor people, policies, procedures, projects and plans. These steps, in enabling o/ms to apply a cohesive, coherent approach to their strategic management, ultimately bring about organisational effectiveness and efficiency. A review of further literature pertaining to the elements contained in Figure 1, that is, environmental awareness, leadership, management, and organisational performance, led to the construction of the second stage of the model (Figure 2). This second stage allows us to plot, using data obtained from the aforementioned instrument, the particular management-related practices or variables pertaining to each of these elements as exhibited by an organisation. In so doing, we can identify an organisation’s ‘stance’ on each element and gain an overall profile of the organisation’s strategic management capability. Some researchers (eg., McGregor, 1967; Peters & Waterman, 1982) contend that models such as ours tend to over-simplify organisational situations because they do not take into account the unique nature of individual organisations.
Classroom Management of Project Management: A Review of Approaches to Managing a Student’s Information System Project Development
Nicky Ellen, Christchurch College of Education, Christchurch, New Zealand
John West, Christchurch College of Education, Christchurch, New Zealand
What are the most effective ways of delivering an applied course in the application of the Systems Development Life Cycle that gives students a taste of real-life project management issues whilst maintaining assessment integrity and lecturer sanity? This paper discusses different approaches, used by two lecturers teaching Systems Analysis and Design in the School of Business at the Christchurch College of Education, for managing a real-life project and student group work and assessment. The School of Business at the Christchurch College of Education provides qualifications that span the New Zealand education framework, including the provision of the nationally awarded New Zealand Diploma in Business and the Bachelor of Business Management, a jointly conferred degree with Griffith University, Brisbane, Australia. Within these two qualifications there is a common Systems Analysis and Design course. In the New Zealand Diploma in Business it is a second year optional course - DB252 Systems Development Project. In the Bachelor of Business Management it is a core course in the Information Systems major – MGT2006 Information Systems Analysis. Its focus is on education of students in the effective development of a computerised system for the management of information within a business. This involves all the steps in the development of a Management Information System, from initial identification of the problem, analysis of the requirements for the new information system, design and development of the new system, through to its implementation into the organisation. The methodology used, which mirrors these stages, is known as the Systems Development Life Cycle (SDLC), which is a traditional methodology recognised by the IT industry. The SDLC takes a sequential approach to information systems development, with a heavy emphasis on front-end analysis of both the current situation and its inherent problems, as well as the perceived new information system. 
This analysis focuses on gathering and fully documenting the requirements of all users of the system. The documentation typically includes narrative reporting from observations, interviews or surveys, as well as diagramming techniques used to depict the existing and new systems. In the early stages of this course, students investigate aspects of project management as well as the role of a Systems Analyst and then take on that mantle for the remainder of the course. Student groups are formed, comprising three or four people, and these groups become the project development team. The learning progresses through each phase of the SDLC with a mix of theoretical and practical sessions, so that by the end of the course, students have produced a fully documented working model of a Microsoft Access relational database which solves a “real life” information management problem. Each phase of the SDLC has key deliverables which are typically signed off by the client before the next phase is commenced. In order to keep the course content as close to real life as possible, these deliverables are also expected from the student groups during the course, and are used for assessment purposes. The final product, however, is developed individually. We have both been involved with the delivery of this course over the past few years. We share a concern to keep the content of this course as applied and "real" as possible and to ensure students gain a good understanding of all steps in the process of effective information systems development. To this end, we have used different methods to manage a real-life project and to manage group work and assessment. This paper compares our two approaches. The literature suggests many good reasons for the inclusion of real-life case study analysis, including the desire to improve project management. 
This is particularly pertinent when one considers the poor success rate of IT projects; the Standish Group's Chaos Report states that only 28% of IT projects are viewed as successful, with about half of all projects coming in "over budget, late or missing some intended functions" (Wilbur, 2001, p.28). The need to generate, and therefore experience, the unexpected during a project's development is critical for students, as it broadens the scope and perspective a potential analyst would bring to a situation. Further benefits can be gained from the use of real-life case studies, as described by Grupe & Jay (2000), who state that such case studies also require students to bring in ideas from a variety of courses and encourage participation and debate in substantive discussion, and therefore better understanding. Given the desire for the use of real case studies, just what is the up-take? A 1996 study by McLeod, comparing undergraduate Systems Analysis and Design courses in North American colleges and universities, found from 647 usable responses that 393 institutions used a project involving an artificial scenario and 335 used a project involving a real situation, with 81 institutions utilising a combination of both. It is surprising, given the desire to provide grounded theory in training IT professionals, that more institutions do not use this approach. Gibson, O’Reilly & Hughes (2002), in their discussion of the impact of ICT on developed societies and the importance of exposing students to up-to-date usage of computer technologies, confirm that it is imperative that students learn what is relevant to the environment in which they will be working. In going on to describe teaching approaches which facilitate the gaining of practical experience, Gibson et al (2002) describe the practice of converting existing lecture notes into static documents, often displayed on the web, as having “serious pedagogical shortcomings” (p.21). 
They compare this traditional approach to learning with a real-life project-based experience, which they see as providing a greater understanding for students, not only in the use of ICT but also in terms of gaining industrial experience. The literature also provides an interesting commentary on group work and its assessment. Weistroffer & Roland-Gasen (1995), in their assessment of IS courses, surveyed employers on the important knowledge and skill areas of an IT graduate. One of the key skill areas identified was “team membership and liaison skills with co-workers, management, and customers” (p.6, Table 2). This is particularly significant when one considers the amount of team work undertaken by workers at all levels of the hierarchy, with managers, on average, serving on three teams at any one time (quoted in Chapman & Van Auken, 2001).
Cheating – What is It and Why do It: A Study in New Zealand Tertiary Institutions of the Perceptions and Justifications for Academic Dishonesty
Kelly de Lambert, Christchurch College of Education, Christchurch, New Zealand
Nicky Ellen, Christchurch College of Education, Christchurch, New Zealand
Louise Taylor, Christchurch College of Education, Christchurch, New Zealand
How do different groups of people perceive academic dishonesty and what are the reasons they give for undertaking academically dishonest acts? It is posited that students and lecturing staff have different perceptions of what constitutes academic dishonesty and the seriousness of the acts. This paper also presents the findings of investigations into the reasons given for academic dishonesty and rates of prevalence within New Zealand tertiary institutions. In this age of increased pressure for academic success and the endeavour for higher qualifications, academic dishonesty has become a much-discussed subject amongst tertiary teaching staff. Not only is it of interest to lecturing staff in tertiary institutions, but also to employers of graduates, and to students who may view the exploits of their academically dishonest peers as injurious to their own hard-earned success. The research undertaken draws on literature and other published studies, but is primarily concerned with the tertiary sector in New Zealand, specifically universities and polytechnics, and encompasses a variety of academic disciplines. Although there is a comprehensive set of studies available from overseas, there is very little material available in the New Zealand context. To this end, an independent study was carried out with New Zealand tertiary institutions, students and academic staff. Areas investigated include prevalence, perceptions, justifications, action and non-action, penalties, policy and prevention. The intention of this paper is to present the findings from three of these areas - prevalence, perceptions and justifications. We have interpreted the term academic honesty to describe the submission of work for assessment that has been produced legitimately by the student who will be awarded the grade, and which demonstrates the student's knowledge and understanding of the content or processes being assessed. 
Evidence to support the student's work can, and should, be provided by referencing legitimate work of others, as long as it is appropriately acknowledged. Therefore, academic dishonesty, referred to throughout this paper, includes any behaviour that breaches this. As the study was investigative in nature, statistics presented in this paper are descriptive rather than inferential. A literature search was undertaken from which a number of major studies in the area of academic dishonesty at tertiary level were identified. A variety of techniques were used for data collection in these studies, including questionnaires and interviews, both structured and unstructured. Several studies report on the prevalence of academic dishonesty at tertiary level, indicating a range of 67%-86% of students involved in acts of dishonest practice and a greater number of male students than female students acting dishonestly (Roig & Ballew, 1994; Davis, Grover, Becker & McGregor, 1992; Payne & Nantz, 1994). Literature comparing prevalence between academic disciplines and ethnic groups is sparse. However, one study of United States university students versus Central European (Polish) students concluded that a much greater proportion of Polish students had been involved in some form of cheating (84%) during their tertiary career than their counterparts from the States (55%) (Lupton, Chapman, & Weiss, 2000). Studies into perceptions held by students and academic staff employ a variety of methodologies, from the ranking of dishonest acts through to indicating appropriate penalties based on views of seriousness. Findings from these studies indicate that students tend to view any form of exam-related cheating as far more serious than dishonest acts performed whilst completing formative, unsupervised assessment, and, further, that students are far more tolerant of academic dishonesty than academic staff. 
Business students hold the most tolerant views, while Social Science students are the most condemning of academic dishonesty. It is evident that female students are far less tolerant of academic dishonesty than their male counterparts. There is little in the literature to indicate any significant difference in attitudes between ethnic groups, apart from the Polish/US study, which shows that Polish students have a much more tolerant view of academic dishonesty than US students (Roig & Ballew, 1994; Roberts & Toombs, 1993; Lupton et al, 2000; Ashworth & Bannister, 1997). Findings from several researchers indicate that the most common justifications given for acting dishonestly include pressure of time and the desire for a better grade (Roig & Ballew, 1994; Payne & Nantz, 1994; Franklyn-Stokes & Newstead, 1995). Three questionnaires were developed and administered to academic institutions, teaching staff and students, as described below. Questionnaires were sent to the Registrars of all New Zealand polytechnics and universities (22). The purpose of the questionnaire was to determine the number of equivalent full-time students (EFTS) attending, the number of documented cases of dishonest practice and the sanctions imposed. The 14 institutions from which responses were received host 194,594 of New Zealand’s 282,808 tertiary students. With an average EFTS population of 6,086, the responding institutions were at the larger end of the scale, educating 1,000-15,000 students each. Questionnaires were posted to 350 lecturing staff, representing a range of disciplines in polytechnics and universities throughout New Zealand. Responses were received from 96 university staff and 17 polytechnic staff.
Leadership Style and its Relationship to Individual Differences in Personality, Moral Orientation and Ethical Judgment – A Ph.D. Proposal
Jennifer Margaret, Christchurch College of Education, Christchurch, New Zealand
The goal of this study is to search for different ethical judgements among different groups of managerial professional participants and to see if these judgements vary according to their type of leadership style, personality and personal moral orientation. The participants are New Zealand and Australian managerial professionals from these countries' education and business sectors. The central themes are: (1) At the behavioural level - linking the most recent leadership theory to the notions of organisational virtues, the applied ethics notion of moral intensity, and the moral psychology notion of personal moral orientation. (2) At the mental representation level - exploring the underlying mechanisms of mental representation of leaders' moral orientation and the possible consequences of differential covert encoding for ethical decision processes and leadership behaviour. The following sections address these themes separately, discussing the theoretical rationale and previous empirical research concerning the proposed relationships, briefly defining each variable in the proposed research, and concluding with hypotheses and methodology for each. The latest emerging themes in the Organisational Psychology literature concern the notions of ethics and justice. "Workplace justice, a long-standing topic in organisational research, is an increasing concern..." (Rousseau, 1997). Rousseau (1997) has reviewed all the key areas of organisational psychology and concludes that each of these areas has either become a justice issue or has an important ethics or justice component to it. Organisational change is one such area, in which Rousseau notes that the perceived fairness of outcomes (distributive justice), the communication process in managing change (interactional justice), and the processes whereby implementation decisions are made (procedural justice) all influence, to varying degrees, employees' perceptions of workplace justice. 
The critical point that Rousseau (1997) highlights regarding the employment relationship is the centrality of the issue of trust and its dependence on employees' perceptions of workplace justice: "Awareness has increased regarding the importance of trust in the employment relationship...organisational citizenship is a correlate and possible outcome of trust which has been found to be influenced by perceptions of procedural fairness" (Rousseau, 1997). Because of the importance of the notion of justice in all areas of work psychology, this proposal approaches leadership from a justice and ethics perspective. Theme 1: Relating leadership theory to the notion of organisational virtues, the applied ethics notion of moral intensity, and the moral psychology notion of personal moral orientation. Transformational and Transactional Leadership: The latest leadership model is Bass's (1994) transformational and transactional leadership. Transformational and transactional leadership originates in the sphere of political analysis; Bass took these concepts and applied them more generally to supervisor-subordinate relations. Bass finds that transactional leaders have a cost-benefit orientation towards leadership, whereby they concentrate on rewarding effort appropriately and ensuring that behaviour conforms to expectations. In the process, they "concentrate on compromise, intrigue, and control" (Bass, 1994). Whilst transactional leadership is likely to be conservative, transformational leadership is either revolutionary or reactionary. Transformational leaders are "charismatic, inspirational, visionary, intellectually stimulating and considerate of individual needs. They encourage followers to find novel solutions to problems and delegate, coach, advise and provide feedback" (Bass, 1994). The efficacy of transformational leadership is well documented. 
Recent empirical studies examine the relationship between transformational leadership and: followers' cultural orientation and work performance (eg., Jung & Avolio, 1999); personality orientation (eg., Church & Waclawski, 1998; Valaint & Loring, 1998); group process and ethical decision making (eg., Schminke & Wells, 1999); gender differences (eg., Bass & Avolio, 1996); and followers' innovative behaviours and quality of leader-member exchange (eg., Basu & Green, 1997; Gerstner & Day, 1997). Further studies show that transformational leadership correlates positively and strongly with work performance (eg., Ross & Offermann, 1997); group cohesion (eg., Sosik, Avolio & Kahai, 1997); attitude towards work (eg., Kirkpatrick & Locke, 1996); and follower satisfaction and perceived leader effectiveness (eg., Parry, 1994). Finally, transformational leadership is shown to augment the success of transactional leadership (eg., Parry, 1996). In other words, the degree of leader success in improving outcome variables is improved by displaying transformational leadership in addition to transactional leadership. After three decades of leadership research, the latest trend is toward virtue leadership. This is evident in the recent upsurge of discussion on moral or ethical leadership (eg., Singer, 1996; Smith, 1995). Central to this ethical leadership literature is the pivotal emphasis on the role of Aristotelian virtues in leadership, although most theorists do not refer directly to Aristotle's classical works. The idea that a harmonious balance of virtues leads to a good life is at the core of Aristotelian ethics. Aristotle identified these traits among the basic moral virtues: Justice, Courage, Temperance, Liberality, Honour, Congeniality, and Truthfulness.
Debt and Taxes, and Tax Deferral
Dr. Terrance Jalbert, University of Hawaii at Hilo, Hilo, HI
Dr. Jeffrey Decker, University of Hawaii at Hilo, Hilo, HI
In this paper we examine the conventional wisdom that ten years of tax deferral is almost as good as exemption. Examining a corporation that invests in a single risk-free bond, we demonstrate that the conventional wisdom regarding tax deferral does not hold. We go on to demonstrate that deferral is not as good as exemption even when the deferral time is extended to 20 or 30 years. Based on these findings, we argue that the equilibrium quantity of bonds outstanding in the economy will be higher than that suggested by Miller (1977). This work has important implications for personal and corporate investment decisions and capital structure analysis, as well as for empirical studies that rely on the work of Miller (1977) to identify marginal tax rates. One of the seminal articles in finance is the 1977 Debt and Taxes article of Merton Miller. In his article, Miller examines the optimal capital structure of the firm. He concludes that while there may be an optimal capital structure for the economy as a whole, there is not an optimal capital structure for any individual firm. He arrives at this conclusion, in part, by arguing that because of tax deferral, the effective personal tax rate on income from stocks is 0%. He makes his argument for a 0% effective personal tax rate in large part on the basis that ten years of tax deferral is almost as good as exemption. Miller's argument has relevance not only in the capital structure literature, but in other literature streams as well. One example is the literature related to the Darby Effect. In much of the literature concerning the Darby Effect, the corporate tax rate is used as the marginal investor's tax rate based on the analysis of Miller (1977). A specific example is Jaffe (1985), who examines the Darby Hypothesis using a technique based on Miller (1977). In addition to the Miller-based analysis, Jaffe (1985) augments Miller's suggested corporate tax rate by including personal taxation on equity income. 
He finds that the incorporation of personal taxes increases the responsiveness of interest rates to changes in the rate of inflation. This finding suggests a possible flaw in Miller's analysis. If Miller were correct in his contention that deferment of personal taxes for 10 years is almost as good as exemption, and corporations attempt to avoid these personal taxes, there should be no difference in the analysis. In this paper, the folk wisdom that ten years of deferral is almost as good as exemption is examined. Modigliani and Miller (1958) demonstrate that, in a world without taxes, the value of a firm subject to the double taxation system is independent of its capital structure. As such, the value of the levered firm (a firm with debt in its capital structure) will be the same as that of an otherwise identical unlevered firm (a firm without debt in its capital structure). Their theory is based on an arbitrage argument that demonstrates how any gain from leverage will be arbitraged away. In their subsequent paper, Modigliani and Miller (1963) relax the assumption of no taxes. In this paper, entity-level taxes (but not personal taxes) are incorporated into the analysis. When entity-level taxes are included in the analysis, it is shown that the value of the firm is not independent of the method of financing. Rather, the value of the firm is increased through the use of debt in the capital structure, because of the tax deductibility of interest payments on debt. Miller (1977) argues that the gain from leverage may be smaller than what was suggested in Modigliani and Miller (1963). Miller incorporates both entity-level and personal taxes into his model. 
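As a rough, back-of-envelope illustration of the deferral question the paper examines (not the paper's own model), one can compare the terminal wealth of $1 under three stylised tax treatments: full exemption, a single tax on the accumulated gain at the horizon (deferral), and annual taxation of returns. The rates and horizon below are assumed purely for illustration:

```python
def terminal_wealth(r, t, n, mode):
    """Terminal wealth of $1 invested for n years at pre-tax rate r,
    under three stylised tax treatments (illustrative assumptions only)."""
    if mode == "exempt":                      # never taxed
        return (1 + r) ** n
    if mode == "deferred":                    # gain taxed once, at year n
        w = (1 + r) ** n
        return w - t * (w - 1)
    if mode == "annual":                      # return taxed every year
        return (1 + r * (1 - t)) ** n
    raise ValueError(mode)

# Assumed figures: 10% pre-tax return, 35% tax rate, 10-year horizon.
exempt   = terminal_wealth(0.10, 0.35, 10, "exempt")    # ≈ 2.594
deferred = terminal_wealth(0.10, 0.35, 10, "deferred")  # ≈ 2.036
annual   = terminal_wealth(0.10, 0.35, 10, "annual")    # ≈ 1.877

# Share of the exemption advantage (over annual taxation) that
# ten years of deferral actually captures:
share = (deferred - annual) / (exempt - annual)         # ≈ 0.22
```

On these assumed numbers, ten years of deferral captures only about a fifth of the benefit of outright exemption relative to annual taxation, which is consistent in spirit with the paper's claim that the conventional wisdom overstates the value of deferral.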
Where TPS is the personal tax rate on income from stocks, TPB is the personal tax rate on income from bonds, TC is the entity-level tax rate, E(NOI) is the expected net operating income of the firm, r is the discount rate for an all-equity firm of equivalent risk, Kb is the risk-free rate of interest, Kd is the interest rate on debt, and D is the book value of the debt outstanding. Taking the difference between the value of the levered firm (Eq. 2) and the unlevered firm (Eq. 1) gives the increase in firm value associated with using debt in the capital structure. Based on equation 5, Miller argues that the gain from leverage is less than what was previously thought, and can even turn negative. Miller (1977) argues that the personal tax rate on income from stocks will effectively equal zero, reducing or eliminating any gain from leverage. The intuition behind this argument is that a shareholder will never pay personal taxes as long as the corporation does not pay a dividend or the investor does not sell the stock. Miller (1977) argues that while, because of clientele effects, there is not an optimal capital structure for any individual firm, there is an optimal capital structure for firms as a whole. Miller (1977) goes on to argue that there will be an equilibrium amount of bonds outstanding in the economy. This equilibrium occurs as pictured in Figure 1 and is based on the relative yields of taxable versus tax-exempt bonds. Miller (1977) argues that the yield on taxable bonds must be grossed up by the corporate tax rate relative to the yield on tax-exempt bonds in order to arrive at an equilibrium quantity of bonds outstanding. This relationship is displayed in Figure 1. The demand curve for bonds, D, is given by
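The display equations the passage above refers to (Eq. 1 for the unlevered firm, Eq. 2 for the levered firm, and equation 5 for the gain from leverage) did not survive reproduction in this excerpt. As a hedged reconstruction from the variable definitions above and the standard textbook statement of Miller (1977) — the paper's own numbering and intermediate steps may differ — they take the form:

```latex
% Value of the unlevered firm (Eq. 1), with entity-level and
% personal taxes on equity income:
V_U = \frac{E(NOI)\,(1 - T_C)(1 - T_{PS})}{r}

% Value of the levered firm (Eq. 2):
V_L = V_U + \left[ 1 - \frac{(1 - T_C)(1 - T_{PS})}{1 - T_{PB}} \right] D

% The difference is Miller's gain from leverage (the paper's equation 5):
G_L = V_L - V_U = \left[ 1 - \frac{(1 - T_C)(1 - T_{PS})}{1 - T_{PB}} \right] D
```

When (1 - TC)(1 - TPS) equals (1 - TPB), the bracketed term is zero and leverage adds no value; when (1 - TC)(1 - TPS) exceeds (1 - TPB), the gain turns negative, which is the possibility the passage refers to.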
Empirical Analysis of Determinants of Geographic Differentials in the Bank Failure Rate in the U.S.: A Heteroskedastic-Tobit Estimation
Dr. Richard J. Cebula, Armstrong Atlantic State University, Savannah, GA
Dr. Richard D. McGrath, Armstrong Atlantic State University, Savannah, GA
William O. Perry, Armstrong Atlantic State University, Savannah, GA
Using the Heteroskedastic-Tobit model to deal with both censored data and a heteroskedasticity problem, this study addresses determinants of interstate differentials in bank closing rates over the 1982-91 period. It is found that the bank closing rate in a state is an increasing function of the cost of deposits, the percentage of state employment derived from oil and natural gas extraction, and the existence of unit banking regulations, while being a decreasing function of housing price inflation in the state, the average percentage growth rate of the GSP in the state, and the percentage of banks in the state having federal charters. (JEL codes: G2, G20, G21) For the period from 1943 through 1981, relatively few banks failed because of insolvency. This situation changed dramatically beginning with the year 1982, during which 42 banks were closed, followed by 48 closings in 1983 and 79 closings in 1984. The number of closed banks increased sharply thereafter, hitting 200 closings in 1988 and 206 in 1989, and remaining above 119 closings per year through 1992. Indeed, the bank closing rate in the U.S. did not decline significantly until after the implementation of provisions (such as risk-related deposit insurance and stricter capital requirements) of FDICIA, the Federal Deposit Insurance Corporation Improvement Act of 1991 [Cebula (1999)]. Commercial bank failure data in the U.S. reveal a very large interstate variation in bank failure rates. Indeed, the bank failure rate by state, especially during the 1980s and very early 1990s, differed widely among the various states. For example, for the 1982-91 study period (when bank closings had especially intensified), there were eight states that experienced zero closings, whereas there were ten states in which the percentage of banks that failed reached double digits. 
In view of this widely varied interstate pattern in the bank failure rate, and given the implications of bank closings/failures for depositors and taxpayers, it is important to determine whether regional factors played a role, especially if policymakers are to be properly prepared to prevent such closing problems in the future. Banks may engage in riskier activities when they have access to higher levels of federally insured deposits while having very low, if not negligible or even negative, net worth, as was so often the case in the 1980s and very early 1990s [Barth (1991), Barth and Brumbaugh (1992), Cebula (1993; 1999)]. What is less understood is why some banks may engage in such behavior whereas others do not. That closing rates differ so widely among states permits analysis beyond bank-specific variables, allowing assessment of the impact of regional economic factors. It also permits assessment of whether some states experienced bank failures because of their regulatory environment or simply were lucky to have avoided an adverse economic circumstance. This study empirically analyzes bank closings by state for the period 1982-91. Given (1) that the values of seven of the observations on the dependent variable are zero and (2) the need to control for heteroskedasticity, we adopt the Heteroskedastic-TOBIT estimation technique [cf. Cebula, Barth, and Belton (1995)]. Investigations of insolvencies among various types of financial institutions in the U.S. have been conducted by a number of scholars [for example, Amos (1992), Barth (1991), Barth, Brumbaugh, and Litan (1992), Barth and Brumbaugh (1992), Brumbaugh (1988), Cebula (1993; 1999), Chao and Cebula (1996), Kane (1985), Loucks (1994), and Saltz (1994; 1995)]. While many of these studies have focused on the problems of savings and loans (S&Ls), the empirical analysis of banks has certainly not been lacking [Amos (1992), Barth, Brumbaugh, and Litan (1992), Loucks (1994), and Saltz (1994)]. 
Based to some significant degree on Amos (1992), Barth, Brumbaugh, and Litan (1992), Cebula, Barth, and Belton (1995), Loucks (1994), and Saltz (1994), this study focuses on four categories of factors that have been isolated as potentially influencing bank failures: 1. purely financial market factors: the cost of deposits [ACBCD]; 2. other economic factors: unemployment rates [UN], the average growth rate of gross state product over time [AGSP], the inflation rate of housing [HINFL], and the percentage of gross state product derived from oil and natural gas extraction [OILNG]; 3. state regulations on branching: such as whether unit banking is the regulation in a state [UNIT]; 4. bank charters: the extent to which banks in each state have federal charters [PFEDCH] rather than state charters. In this study, it is hypothesized that the higher the cost of deposits over time [ACBCD], the lower is the bank profit rate over time [Bradley and Jansen (1986) and Saltz (1994)]. Accordingly, the higher the cost of deposits to a bank, the greater the likelihood that over time the bank will fail [Barth, Brumbaugh, and Litan (1992), and Saltz (1994)], ceteris paribus. Arguably, the higher the unemployment rate in a state [UN], the greater the probability of loan defaults and (perhaps) foreclosures [Saltz (1994)] and hence of bank financial stress and, ultimately, of bank closings, ceteris paribus. Next, states with more rapidly growing levels of gross state product [AGSP] are more likely to be environments with both (1) fewer loan defaults over time and (2) more rapidly growing demand for bank services and, as a result, fewer bank closings [Amos (1992)], ceteris paribus. In addition, a more rapid inflation rate of housing prices [HINFL] would tend to reflect a more vibrant housing market and thus potentially a more vital economic environment [Chao and Cebula (1996)]. Such an environment would be likely to be associated with fewer bank failures, ceteris paribus. 
The oil-price situation during the 1980s may also have been an important factor affecting the performance of banks. Prices of crude petroleum, for example, dropped significantly during the period 1980-1985 and indeed were halved from 1985 to 1986.
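The Heteroskedastic-TOBIT setup described above (a dependent variable left-censored at zero for states with no closings, combined with an error variance that differs across observations) can be illustrated in a few lines. The sketch below is purely illustrative: it fits a Tobit likelihood with multiplicative heteroskedasticity to simulated data, not to the authors' state-level series, and all variable names and parameter values are hypothetical.

```python
# Illustrative Heteroskedastic-Tobit: y left-censored at zero,
# with sigma_i = exp(z_i @ gamma). Simulated data, hypothetical values.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 500
x = np.column_stack([np.ones(n), rng.normal(size=n)])  # mean regressors
z = np.column_stack([np.ones(n), rng.normal(size=n)])  # variance regressors
beta_true = np.array([0.5, 1.0])
gamma_true = np.array([-0.5, 0.3])
sigma = np.exp(z @ gamma_true)
y_star = x @ beta_true + sigma * rng.normal(size=n)    # latent closing rate
y = np.maximum(y_star, 0.0)                            # observed: censored at 0

def neg_loglik(theta):
    beta, gamma = theta[:2], theta[2:]
    mu, s = x @ beta, np.exp(z @ gamma)
    censored = y <= 0
    ll = np.where(
        censored,
        norm.logcdf(-mu / s),                    # P(y* <= 0) for zero observations
        norm.logpdf((y - mu) / s) - np.log(s),   # normal density for the rest
    )
    return -ll.sum()

res = minimize(neg_loglik, x0=np.zeros(4), method="BFGS")
print(np.round(res.x, 2))  # estimates of (beta0, beta1, gamma0, gamma1)
```

Censored observations contribute the probability mass at the censoring point to the likelihood, uncensored ones the usual normal density; modeling the variance as an exponential function of covariates is one common way to handle the heteroskedasticity jointly with the censoring.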
Strategic Market Planning in Romania: Implications for Practitioners
Dr. Victoria Seitz, California State University, San Bernardino, CA
Dr. Nabil Y. Razzouk, California State University, San Bernardino, CA
Romania’s recent invitation to NATO and an American president’s visit symbolize the growing potential of this former Communist country. Although globalization of western brands is prevalent in Romania, there is evidence of a lack of strategic planning for these brands that would aid in developing brand loyalty in the Romanian market. The authors identify strategic guidelines for consideration of the Romanian market, as well as other Central and Eastern European nations, that will enhance brand loyalty through its life cycle. Since the fall of Communism in the late 1980s, Central and Eastern European countries have grappled with the transition process toward a free market economy with varying results. The Russian Federation has seen great growth in industrialization and an average per capita income of $500 as a result (Starobin and Belton, 2002). In the Czech Republic the per capita income has increased to an average of $1,000 per month through their transition efforts (Park, 2002). Western European and American companies have been instrumental in seeing these countries through to a market economy by introducing brands to the marketplace via exports, subsidiaries, joint ventures, mergers and acquisitions. Hundreds of western brands are now available through globalization to a market that was literally cut off from the rest of the world. Some of the top brands include Coca-Cola, Microsoft, IBM, General Electric and Intel (“The 100 Top Brands”, 2002). However, in the race to gain market share in these newly opened markets, the globalization of brands has taken a turn for the worse. Rather than taking the time to understand the market, marketers have relied on economies of scale in getting their piece of the pie. This is particularly true in Romania, an associate member of the European Union, which was recently invited into NATO and enjoyed a visit by an American president in November 2002. 
Brands are advertised in English, and low-wage workers are exploited in producing low-quality products for the Romanian market, with prices remaining unaffordable for the majority of the marketplace. Globalization has been misinterpreted when considered synonymous with the world being one big homogeneous market with the same needs and wants, or universalization (Davis, www.infed.org). Rather, Taggart, Berry and McDermott (2001) note that globalization is now an interdependency among national economies and business structures. Davies (www.infed.org) states that globalization is an intensification of social relationships whereby local happenings, such as employment, are shaped by events occurring many miles away. In neither perspective is globalization defined as a homogeneous market, but rather as an interconnected marketplace that is culturally diverse. In developing global strategic plans, marketers must look at both the external environment and their internal resources. When planning for Central or Eastern Europe, strategic development must go further. Czinkota, Gaisbauer, and Springer (1997) suggest that marketers “should sense an obligation to help restructure society and improve the standard of living in this region.” Rather than analyzing only current structures in the marketplace such as political climate, culture, and the economy, marketers must understand the country’s former political circumstances and their effect on the people. Change in these countries is constrained by years of ideological pressures that were fundamentally opposed to marketing of any kind (Czinkota, Gaisbauer, and Springer, 1997). But change is possible. Starobin and Belton (2002) reported that Russians are just starting to trust other Russians in business arrangements. In reviewing the political climate of communism, Lavigne (1999) stated that it had no way of dealing with economic volatility beyond the party’s administration. 
Only in China has communism survived to this day. In other countries, basic consumer needs were satisfied at only a minimal level, from housing to services. Products and services offered to consumers were shabby and the selection was poor. Waiting in line for limited product offerings was a common sight as governments focused funds on military growth and the elimination of debt. In most communist countries one thing led to another: huge subsidies to offset minimal consumption led to shortages and to other ways of overcoming those shortages. Public services, such as hospitals and schools, were neglected due to very limited investment in the consumer sector by the government (Lavigne, 1999). Only in Yugoslavia was economic activity directed at self-management (Lavigne, 1999). Romania was part of the communist bloc from the conclusion of WWII to 1989. In December of that year the leader of one of the worst communist regimes and his wife were shot, after a makeshift court found them guilty during the revolution. Nicolae Ceausescu became General Secretary of Romania’s Communist Party in 1965, and his rule became one of the most notorious and ruthless dictatorships in the world. His domestic policies were marked by frequent, disastrous economic schemes, ultimately resulting in an increasingly repressive and corrupt government (Holman, www.ceausescu.org). According to Holman (www.ceausescu.org), Ceausescu was regarded as a maverick communist due to his opposition to the invasion of Czechoslovakia. However, as a Stalinist, his policy of creating a homogeneous socialist population out of the traditionally peasant Romanian people was inflexible and eventually eroded the country. To simplify control over the people he designed large urban centers, uprooting families to cities to work in factories. Further, the creation of a heavy industrial base was intended to make Romania self-sufficient and eliminate all foreign debt. 
Empty shelves of food and other consumer goods were common in stores as a result of Ceausescu’s plan to eliminate the foreign debt. During this period economic growth fell from 10 percent to 3 percent, yet Ceausescu did not modify his policy. Instead he resorted to coercion and to publishing misleading production figures.
Taxonomy and Remedy of Work Hazards Associated with Office Information Systems
Dr. Haidar M. Fraihat, King Fahd University of Petroleum and Minerals, Saudi Arabia
The use of information technology in the workplace has both positive and negative impacts on knowledge workers, depending on the manner in which the technology is utilized. Potential negative impacts include deskilling, repetition of work tasks, and excessive monitoring of workers, as well as a spectrum of potential health hazards and job injuries. Through interviews and the investigation of selected organizational and medical archives, the paper reveals that OIS-related job hazards differ from other job hazards: mental, emotional, sociological and psychological harms are more closely associated with OIS-caused job hazards, and combating these problems requires a collective effort by all of the stakeholders identified in the paper. The paper argues that several methods can be used to combat OIS-related hazards, including awareness creation, training, following proper ergonomics procedures and standards, increased R&D spending by the IT industry, and modernized government legislation. In addition, the paper provides a set of activities knowledge workers need to undertake in order to mitigate this modern-day organizational predicament. Information technology products have expedited the transformation of our world from the industrial age to the information age. The unprecedented mushrooming of the production and use of information has increased the global reliance on knowledge and hence the rise of the information society. The unprecedented increase in the worldwide number of knowledge workers, most of whom execute daily tasks inside offices and make use of at least one type of IT product, has brought to attention new, unfamiliar forms of work hazards in this seemingly comfortable work environment. Information technology has eliminated many tedious or unpleasant tasks that formerly had to be performed manually. 
For example, word processing and desktop publishing have made producing office documents much easier, while the prevalence of computer networks (intranets, extranets and the Internet) has reduced the need for employees to physically travel between offices and buildings. This has allowed knowledge workers to concentrate on more challenging and interesting assignments, upgraded the skill level of the work performed, and created challenging jobs requiring highly developed skills within computer-using organizations. Thus, office information technologies can enhance the quality of the work environment because they can upgrade working conditions and the content of work activities. Of course, some jobs created by information technology, data entry for example, are quite repetitive and routine. Some of these jobs draw criticism because they require continual repetition of elementary tasks, thus forcing a worker to perform like a machine rather than like a skilled craftsperson. Many automated operations are also criticized for relegating people to a standby role, where workers spend most of their time waiting for an infrequent opportunity to push some buttons. Such work patterns have negative effects on employee well-being and hence on the quality of work in the organization. Notwithstanding any other objective, the primary objective of any corporation is to make a profit. Likewise, government organizations have the objective of minimizing the cost of service delivery. Work hazards, whether health-related, social, moral or ethical, imply certain costs to any organization. Regardless of their nature, the ramifications of these hazards lower organizational productivity and increase the health and insurance bill. According to the Joyce Institute of Seattle, strains, sprains, tendonitis, and other problems account for more than 60% of all occupational illnesses and about a third of workers’ compensation claims. 
In the United States of America, it has been estimated that these types of health problems consume about US$27 billion of corporate budgets annually. Claims for Repetitive Motion Disorder (RMD), a disorder linked to misuse of keyboards, have increased greatly in recent years. A similar trend is observed with Repetitive Stress Injury syndrome (RSI), which results in the inability to hold items, and Cumulative Trauma Syndrome (CTS), which is the aggravation of the pathways for the nerves that travel through the wrist (the carpal tunnel). Legal costs can be catastrophic as well. For example, one U.S. court threw out a US$5.3 million verdict against Digital Equipment Corporation in a keyboard-injury case, while in another case a judge upheld a US$274,000 jury award for repetitive stress syndrome. The objectives of this paper are six-fold: to (1) provide a taxonomy of job hazards associated with office information systems, (2) analyze the nature of work hazards associated with the use of office information systems, (3) assess their impact on knowledge workers and organizations, (4) provide suggestions on how management should deal with this problem, (5) provide guidelines for all stakeholders (employee, management, organization, government, IT industry, and society) on how to deal with OIS-caused job hazards and maximize productivity by optimizing the interaction between knowledge workers and their work environment, and (6) provide some academic insights into this vital area of research, previously dominated by applied research conducted by the IT industry.
Palto Ranjan Datta, City Business College, London, UK
Uncertainty and Learning in University-Industry Knowledge Transfer Projects
Dr. Abdelkader Daghfous, American University of Sharjah, Sharjah, U.A.E
This paper presents the results of an exploratory case study that investigates the effects of learning activities and of uncertainty, as perceived by the recipient firm, on the benefits to that firm from university-industry knowledge transfer projects. The goal of this case study is to explore such relationships using data from a system development project. While successful knowledge transfer indicates a high level of tangible and intangible benefits to the firm, uncertainty is measured in terms of the perceived lack of technical and organizational knowledge. Although the nature of this study is exploratory, the results obtained provide valuable insights for future empirical research, as well as useful prescriptions for more successful knowledge transfer projects. The ability of firms to generate and integrate new knowledge is an important source of competitive advantage. Whereas knowledge per se is not new, recognizing its value and learning how to manage it effectively is new, and has given rise to knowledge management (e.g., see Alavi and Leidner, 2001) and to the knowledge-based perspective of the firm (e.g., see Steensma and Corley, 2000). Indeed, a technology transfer project is essentially a knowledge accumulation task, which Gupta and Govindarajan (2000) further disaggregated into knowledge creation, acquisition, and retention. In contrast, Davenport and Prusak (1998) argued that the knowledge transfer process consists of transmission and absorption, culminating in a behavioral change by the recipient firm. Knowledge transfer has not only been a conceptual extension of technology transfer; it has also emerged as one of the most important and most researched activities and processes in knowledge management. Much of the research on technology transfer has been directed at the processes of learning and transfer, especially in the context of knowledge transfer projects (e.g., Leonard-Barton, 1995), R&D collaboration (e.g., Amabile et 
al., 2001), and strategic alliances (e.g., Mowery, Oxley, and Silverman, 1996; and Sen and Egelhoff, 2000). Both streams of literature, on technology transfer and on organizational learning, increasingly recognize and provide empirical evidence of the intangible (i.e., spillover or unintended) benefits that are achieved through learning activities performed by the recipient firm during a knowledge transfer project. Several studies have sought to illustrate how firms can fully exploit a knowledge transfer relationship so that tangible as well as intangible benefits accrue to the firm. For inter-organizational collaborations in general, important intangible benefits are primarily of the learning type, such as learning how to transfer knowledge across alliances and learning how to locate the firm in capability-enhancing network positions (Powell, Kogut, & Smith-Doerr, 1996). Most such studies followed Cohen & Levinthal’s (1989) seminal work, which provided empirical evidence that R&D not only generates new knowledge for the firm, but also enhances its ability to assimilate and exploit existing knowledge. They argued that R&D provides a spillover benefit, which consists of enhancing the firm’s ability to learn from external sources of knowledge and, subsequently, its ability to create new knowledge. This case study builds on such research by describing a university-industry knowledge transfer project from a US-based research university to a nearby private company. The objective of this case study is to explore the following questions: (1) how do learning activities undertaken by the recipient firm increase the benefits of the knowledge transfer project? and (2) how does uncertainty associated with the new system to be transferred, and with the organizational impact of that system, affect the relationships between learning activities and the benefits of the transfer project? Past research has investigated the effects of different types of perceived uncertainty in a variety of contexts. 
For instance, Song and Montoya-Weiss (2001) found that technological uncertainty has a significant moderating effect on the outcome of new product development projects. Meanwhile, Waldman et al. (2001) found that environmental uncertainty moderates the relationship between leadership characteristics and firm performance. Both studies drew heavily from Milliken’s (1987) conception of perceived environmental uncertainty, which she defined in terms of the ability of individuals to understand the direction of change, the potential impact of such change, and the likelihood of success of a particular response. In this study, uncertainty, perceived by the recipient firm during the initial phase of the project, is conceptualized as a lack of knowledge about the new technology (or system) and its impact on the organization (Daghfous and White, 1994). Technical uncertainty is taken as negatively related to the level of familiarity the recipient firm has with the features of, and the science underlying, the knowledge being transferred. In contrast, organizational uncertainty is taken as negatively related to the recipient firm’s level of familiarity with the potential impact of the new knowledge on the organization, its existing set of skills, and its systems. This study focuses on the potentially moderating role of uncertainty in the context of university-industry knowledge transfer projects. The technology transfer literature typically addresses barriers to and facilitators of effective transfer mostly in terms of the characteristics of the new technology (Fleischer and Tornatzky, 1990; Leonard-Barton and Sinha, 1993), the nature of the knowledge being transferred (Simonin, 1999), source-recipient communication (Cusumano and Elenkov, 1994), and the cultural differences between the partners (e.g., see Mowery, Oxley, and Silverman, 1996). 
In comparison, the organizational learning literature addresses those issues and adds in-depth analyses of factors such as systems thinking (Senge, 1990), institutional and social dysfunctions (Kofman and Senge, 1993), anxieties that affect the speed of learning (Schein, 1993), and methods of creating a learning organization (Garvin, 1993). The organizational learning literature also provides the technology transfer process with new dimensions that can be used to attain a more complete conception of it. For instance, incorporating organizational memory (Huber, 1991) and mental models (Senge, 1990) into the study of technology transfer adds the complexity and depth necessary to address issues such as "unlearning" unproductive procedures and cultures (Imai, Nonaka, & Takeuchi, 1989).
Leadership Simplified: Abandoning the Einsteinian “Unified Field Theory” Approach
Dr. William Burmeister, Elizabethtown College, Elizabethtown, PA
“Leadership Simplified: Abandoning the Einsteinian ‘Unified Field Theory’ Approach” breaks with the latest integrative models of leadership, embracing instead the axiom of Occam’s razor. This philosophical precept, firm in the belief that the fewer assumptions an explanation depends on, the better it is, suggests that leadership can best be understood by the simplest and most obvious criterion: the type of power employed. The long history of formal leadership study has charted a myriad of courses since its inception, and there are a sizeable number of treatments of the topic. One recent article estimates there have been more than 3,000 studies of leadership in the past seventy years (Schriesheim, Tolliver, and Behling, 1983). Stogdill’s 1974 compendium of research on leadership drew on over 3,000 selected references, and the revision by Bass (1981) added another 2,000. Given this large amount of material on leadership and the study of it, one might expect well-established conceptualizations and definitions of the topic (Pfeffer, 1971). This, however, is not the case. The definition itself tends to vary depending upon the orientation and purpose of the author; as Stogdill (Yukl, 1989, p. 252) observed, “There are almost as many definitions of leadership as there are persons who have attempted to define the concept.” Leadership was conceived by Mintzberg, among others, as the vertical relationship between managers and their subordinates. Blake and Mouton (1981) explain leadership as the managerial activity that maximizes productivity, stimulates creative problem solving, and promotes morale and satisfaction. Locke (1991), on the other hand, maintains that leaders establish the basic vision of an organization and managers implement that vision. Leadership, in many ways, remains an enigma, a much discussed but little understood phenomenon (Burns, 1978). One of the earliest trait approaches to studying leadership is often referred to as “the great man theory”. 
Myths, legends, and biographies told of leadership characteristics manifesting themselves in the early lives of great men. The kinds of traits studied in this early research included physical characteristics, personality, and ability. Although the old assumption that leaders are born has been discredited, it is recognized that certain traits may predispose one to the role of an effective leader, but nothing more. The Behavioral Approach focused on specific acts or events of leadership; it described what leaders did. The Ohio State University Leadership Studies and the University of Michigan Studies on Leadership are the two best-known examples. Behaviors could be objectively observed, measured, and, hopefully, learned. Different groups of researchers set out to identify leader behaviors and subsequently developed a list of almost 2,000 such behaviors (Hemphill and Coons, 1957). Unfortunately, the links between those behaviors and leadership effectiveness were not clearly established (Nahavandi, 2000). The third basic approach was the Contingency Approach, in which situational factors began to shape our perspective of leadership. The contingency theories of leadership assume that leaders adapt their behavior to the requirements, constraints, and opportunities presented by the situation. The earliest model, and still regarded as one of the best, was published by Fiedler in 1967. He suggested that leaders be placed in situations that best fit their leadership abilities and behaviors. Unfortunately, because the LPC (least preferred coworker) concept is based upon primary motivation, “leaders” cannot simply change their task-motivated or relationship-motivated behaviors to match the situation at hand. 
Today’s research is an odd mix of old and new approaches: interest in the models of the past is being renewed, while the complex interactions of behavior, style, situational structure, follower maturity, motivation, flexibility, and adaptability are also being examined. This latest approach appears to reject a simplified view of leadership. Much like Einstein’s unsuccessful effort to create a "unified field" theory that would unite electromagnetism, gravity, space, and time, leadership research appears to be following the same path. In an attempt to clarify what is, according to some, “a complex process that results from the interaction among a leader, followers, and the situation” (Nahavandi, 2000), perhaps we should look at the most fundamental aspects of leadership: vision, inspiration, and followers. It is generally conceded that these factors typically define leadership and lie on the periphery of management. Bennis (1989, p. 17) argued, “Many an institution is very well managed and very poorly led. It may excel in the ability to handle each day all the routine inputs yet may never ask whether the routine should be done at all”.
Examining a Singapore Bank’s Competitive Superiority Using Importance-Performance Analysis
Alvin Y. C. Yeo, University of Western Australia, Leicester, Australia
In increasingly liberalized financial environments, financial service providers are recognizing the importance of understanding consumers’ bank selection and product purchase decisions. To this end, this paper diagnoses the competitive superiority of a major local bank in Singapore using the Importance-Performance matrix. Insights on bank selection criteria broadly, and on housing loan selection criteria specifically, are obtained from survey findings from 187 banking customers. Implications of these findings for financial service providers are also discussed. Against a backdrop of increasing competition, shifting customer requirements, unprecedented technological change and the recent wave of deregulation (Houston et al., 2001; Ridnour, Lassk & Shepherd, 2001; Hooley & Mann, 1988; Ennew, Wright & Thwaites, 1993), financial service providers are re-examining the role of marketing in their organisations (Price & Longino, 2000). As a consequence, financial institutions are compelled to identify clear competitive positions in the marketplace (Devlin, Ennew & Mirza, 1995) and refine their marketing mix strategies. This would allow them to realise superior customer value, which invariably drives customer loyalty (Dodds, Monroe & Grewal, 1991; Sweeney, Soutar & Johnson, 1999). Indeed, research has shown that consumers are less loyal and are more willing to switch providers when dissatisfied (McDougall & Levesque, 1994). They are also more price sensitive, as amply evidenced in surveys indicating that, though segments with different needs continue to exist, all consumers now seek competitive interest rates (Krishnan et al., 1999; McDougall & Levesque, 1994). Multiple banking relationships are also common today because consumers have become more task oriented and less interaction oriented (Holstius & Kaynak, 1995). 
These issues are further compounded in the financial sector because there is a high degree of information asymmetry between buyers and sellers (Devlin & Ennew, 1993). This, in turn, is attributed to the intangible nature of financial services: they are processes rather than objects (Shostack, 1982; Bowen & Schneider, 1988), they lack physical form, and they are complex and difficult to understand (Bateson, 1977). Devlin and Ennew (1997) aptly argued that a financial service is an experience or act rather than a physical product, and that often part of the offering is the advice or guidance given, as well as the product features themselves. Although the regulatory environment in Singapore now allows an extremely wide range of product offerings, profitability dictates that a selective offering must be positioned to appeal to specific markets (Laroche & Taylor, 1988). Ries & Trout (1986) emphasise the fundamental importance of perception to positioning in the service provider’s “battle for your mind”. Hence, germane to this paper, which revolves around the study of consumer behaviour in the financial sector, is a thorough and balanced assessment of the reasons for the competitive position of a major local bank in Singapore (herein called “LBank”) in attracting housing loans. LBank is a particularly appropriate context for this application because it has positioned itself to be the “premier consumer bank of the new millennium”, and the fact that it garnered a high market share in the consumer loans market is widely seen as a basic move to strengthen that image. Accordingly, this paper details a model to diagnose LBank’s perceived competitive superiority in the housing loan market vis-à-vis its key competitors. We, therefore, structure the paper as follows: the next section reviews prior literature on bank selection criteria and alternative decision evaluations, which grounds the subsequent development of Importance-Performance frameworks. 
We then present the research methodology and detail the findings. Finally, the managerial implications of the study, research limitations and avenues for further research are discussed. As competition for individuals’ loans intensifies, banks have to adopt a more proactive approach to attracting customers. Specifically, banks have to identify the patronage criteria used by customers and examine the usefulness of these factors as criteria for market segmentation and the design of patronage appeals. Anderson et al. (1976) developed fifteen bank selection criteria: recommendation by friends, reputation, availability of credit, friendliness, service charges on checking accounts, interest charges on loans, location, overdraft privileges on checking accounts, full service offering, parking, hours of operation, interest payments on savings accounts, special services for youths, special services for women, and new account premiums or gifts. In addition, their research identified two market segments: convenience-oriented bank customers and service-oriented bank customers. The former regard banking services as convenience goods and undifferentiated, attaching relatively little importance to any patronage criteria. Nevertheless, they viewed friends’ recommendations, location, reputation, friendliness and service charges on checking accounts as relatively more important selection criteria. The service-oriented bank customers, on the other hand, singled out availability of credit as the most important determinant of bank patronage, followed by reputation, friends’ recommendations, friendliness and interest charges on loans. This segment places emphasis on bank image and on financial considerations such as bank charges, availability of credit and overdraft privileges. Lending further credence to Anderson et al.’s 
(1976) findings is the study by Tan & Chua (1986), who reported that friends, neighbors and family members had a strong influence when choosing financial institutions. This finding, according to Haron, Ahmad & Planisek (1994), is consistent with the ethos of oriental culture, which emphasizes social and family ties.
Teaching Outside the Box: A Look at the Use of Some Nontraditional Teaching Models in Accounting Principles Courses
Dr. Gary Saunders, Marshall University, Huntington, WV
Dr. Jill Ellen R. Christopher, Ohio Northern University, Ada, OH
Paradigms for teaching accounting have been evolving at a fairly rapid pace in the last decade. The Accounting Education Change Commission (AECC) has been a leader in calling for changes. They, and others in the accounting profession, have issued statements addressing the structure and objectives of accounting principles courses. These statements have stressed the need for innovative teaching approaches and the importance of incorporating active learning and team learning (group work) into accounting principles courses. Conventional wisdom might suggest that as the size of accounting principles classes increases, there will be less likelihood that instructors will use innovative teaching methods, and that students will have fewer opportunities to experience active learning and team learning projects. In order to determine relationships between accounting principles class sizes and the relative level of programs’ use of nontraditional teaching models, a questionnaire relating to the teaching of accounting principles was sent to 325 chairpersons of accounting departments in the U.S. Results indicate that the majority of schools use team learning and computer assignments, but that they do not require students to attend laboratory – i.e., active learning – sessions, or to complete simulation projects. Accordingly, the results suggest that while most schools are doing a good job of addressing the need for computer work and group work in accounting principles courses, more effort should be made to include active learning activities in these courses. Paradigms for teaching accounting have been evolving at a fairly rapid pace in the last decade. The Accounting Education Change Commission (AECC) has been a leader in calling for changes. A variety of pronouncements within the accounting profession (AECC, 1990; AICPA, 1988; Arthur Andersen & Co. et al., 1989) have emphasized encouraging more student involvement in the learning process (Caldwell et al., 1996). 
These pronouncements, in essence, stress that: 1. The student should be an active participant in the learning process, 2. The student should be taught to identify and solve unstructured problems that require the use of multiple information sources, 3. Learning by doing should be emphasized, 4. Working in groups should be encouraged, and 5. The creative use of technology is essential. As university accounting programs try to accomplish these objectives, the question of class size must be a consideration. Conventional wisdom suggests that larger universities tend to have larger class sizes and smaller universities, smaller class sizes. A relevant question is whether an accounting program can meet the above objectives--specifically, those of group work and active learning--in larger classes as well as it can in smaller classes. If it cannot, and if larger universities do indeed have larger class sizes, then smaller schools may be better equipping students to be accountants. This study attempts to quantify the relationship between accounting class size and the use of nontraditional teaching models, and to identify the relative use of these models in accounting principles courses. Research dealing with the effects of class size on learning has had mixed results. Hancock compared test scores of students in six small (average 39 students) statistics classes with those in three large classes (average 118 students) and found no significant difference. Similarly, Hill found no significant difference between the performance of students in two small accounting principles classes (maximum of 42 students) and that of students in two large classes (maximum of 120 students). When the data were controlled for attendance and university GPA, students in the large classes actually outperformed those in the small classes.
Glass & Smith found that large class size had a negative effect on student performance, and other researchers [e.g., Williams, Cook, Quinn & Jensen, 1985] found a negligible effect. Nachman and Opochinsky found that large class size had no effect on overall performance but did have a significant negative effect on specific measures of performance such as quizzes or final exams. With respect to the use of different class activities for different class sizes, Siegfried and Kennedy found no evidence to suggest that introductory economics instructors varied their teaching strategies. A total of 178 different classes taught by 121 different instructors at 49 different colleges and universities were considered. Class sizes ranged from eight to 277 students. The lack of adaptation of pedagogy is cause for concern, since conventional wisdom suggests that a class of eight students presents a very different environment than a class of 277 students. Research has demonstrated a negative relationship between class size and students’ evaluation of teaching quality. Mateo and Fernandez considered student evaluations from 1,157 different classes with class sizes ranging from three to 498 students and found that increasing class size was accompanied by lower teaching evaluations. Another study [Fernandez, Mateo, et al., 1998] analyzed the responses to questionnaires administered to students in 2,915 classes of sizes varying from one to 234 students. Again, a significant relationship was found between class size and evaluation of teaching quality. Students’ attitudes toward large classes may be fraught with frustration and confusion because they cannot understand how to accomplish classroom goals [e.g., expected performance levels]. This, in turn, may cause the instructor to use more consideration and structure in an attempt to clarify the goals and expectations for classroom performance.
One study [Siegfried & Kennedy, 1995] found that when classes were grouped into two categories [small and large classes], significantly less time was spent lecturing, and more time was devoted to answering student questions, in smaller classes. Therefore, it appears that students in small classes have more control of the classroom agenda and that, on a per student basis, the time a student is involved in active recitation declines sharply with increasing class size. Larger class sizes may discourage a student from being an active participant in the learning process. Research results are mixed on whether the objectives set forth by the AECC and others can be met in larger classes as well as they can in smaller classes. No evidence that larger schools had larger accounting principles classes was found, and concise answers to the above questions are not available at this time.
Dr. Stuart Locke, University of Waikato, Hamilton, New Zealand
Dr. Frank Scrimgeour, University of Waikato, Hamilton, New Zealand
In December 2002 the Ministry for Economic Development, in New Zealand (NZ), commenced an online benchmarking service for Small-Medium Enterprises (SMEs). This is part of a series of government initiatives to improve the economic performance of the sector. As financial ratios play an important part in financial benchmarking, an empirical test of the distributional properties of financial ratios for NZ SMEs was undertaken. This study samples 3,811 NZ SMEs, providing a wide cross-section of industry types. The results show that very few financial ratios were normally distributed and that comparisons across industries, and within most industry groupings, are likely not to be statistically valid. The study was replicated on two separate years of data and the findings were similar. The purpose of this study is to consider the extent to which financial ratios are consistent between industry groupings for small to medium size enterprises (SMEs) in New Zealand. It is traditional wisdom in accountancy that financial ratios are useful, and popular folklore, such as “the current ratio should be at least 1”, often passes without question. This research is important for analysts seeking to appraise the robustness of a business’ financial position; identify growth opportunities; identify signs of business distress; and determine the value of the business. Benchmark ratios have been found to be of assistance to a number of organisations, including banks, accountants and business advisors. The Management Research Centre (MRC), of the University of Waikato in New Zealand, has compiled benchmark financial ratios, based on surveys of businesses, for 25 years. The extent to which these data are useful for analysts is the crux of this paper. As the importance of business advisory roles directed toward SMEs expands, it is essential that there is a clear picture as to how useful financial ratios are for professionals seeking to use them as benchmarks.
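The normality question at the heart of the study can be made concrete with a small sketch. The paper does not specify which normality test the authors applied, so the Jarque-Bera statistic below (a standard test built on sample skewness and kurtosis) and the illustrative ratio figures are assumptions for demonstration only:

```python
def jarque_bera(xs):
    """Jarque-Bera statistic: approximately chi-squared with 2 df under
    normality, so values above ~5.99 reject normality at the 5% level."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n  # second central moment
    m3 = sum((x - mean) ** 3 for x in xs) / n  # third central moment
    m4 = sum((x - mean) ** 4 for x in xs) / n  # fourth central moment
    skew = m3 / m2 ** 1.5
    kurt = m4 / m2 ** 2
    return n / 6.0 * (skew ** 2 + (kurt - 3.0) ** 2 / 4.0)

# Current ratios are bounded below by zero and often right-skewed,
# so even a small skewed sample produces a large JB statistic.
skewed_ratios = [0.8, 0.9, 1.0, 1.1, 1.2, 1.3, 1.5, 2.0, 4.0, 9.0]
print(jarque_bera(skewed_ratios) > 5.99)  # normality rejected
```

A benchmark comparison that assumes normality (for example, flagging a firm more than two standard deviations from the industry mean) is unreliable when the underlying ratio distribution is skewed in this way, which is the study's central point.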
The Australian Society of CPAs’ small business advisor observes that the area affords “large opportunities for business advisers” (Clayton 1999, p. 44). The growth of business development groups among Chartered Accountants in New Zealand seeking to expand their practice income through providing value-added support for SMEs is indicative of this trend. A capacity to identify the features that distinguish SMEs that are most likely to grow is also of importance for government business development policy in New Zealand. The recent advent of an online benchmarking service provided by the New Zealand Government’s Ministry for Economic Development (MED) (http://www.med.govt.nz/irdev/ind_dev/firm-oundations/questionnaire) is part of the government’s initiatives to improve this growth engine for the economy (Knuckey et al., 2002). Recent work by McMahon (2000) considers the potential to use financial data to cluster manufacturing firms into growth stages. Vos and Ochoki (1997) investigated the financial structure of businesses to see whether there is an observable difference between large and small firms. The usefulness of such research studies can be enhanced through an increased appreciation of the heterogeneity of financial ratios for SMEs. Typically, financial accounting texts identify specific ratios that are likely to meet the needs of various interest or stakeholder groups. Although slight variations exist with regard to categories, naming and inclusion of particular ratios, there does tend to be a general consensus. Conventional classifications are likely to include profitability, efficiency, liquidity and gearing ratios. Various alternative names for ratio groups are used in the literature, and the divisions of users, and indeed what might be useful to specific classes of users, also vary between authors. Nevertheless, there appears to be a broadly accepted schema, along the lines of that presented in Table 1, adapted from Cooper et al. (1998).
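The four conventional categories can be sketched with common textbook definitions. The formulas below are standard ones, not necessarily the exact definitions in Cooper et al. (1998), and the figures are for a hypothetical SME:

```python
# Illustrative figures for a hypothetical SME (all amounts in $000).
sales, net_profit = 1_200.0, 96.0
current_assets, current_liabilities = 300.0, 200.0
total_assets, total_debt, equity = 800.0, 320.0, 480.0

ratios = {
    # Profitability: profit generated per dollar of sales or assets.
    "net_margin": net_profit / sales,
    "return_on_assets": net_profit / total_assets,
    # Efficiency: how intensively assets are used to generate sales.
    "asset_turnover": sales / total_assets,
    # Liquidity: ability to cover short-term obligations.
    "current_ratio": current_assets / current_liabilities,
    # Gearing: reliance on debt relative to owners' funds.
    "debt_to_equity": total_debt / equity,
}

for name, value in ratios.items():
    print(f"{name}: {value:.2f}")
```

For this firm the current ratio is 1.5, comfortably above the "at least 1" folklore, but the study's point is that whether 1.5 is strong or weak depends on the (often non-normal) distribution of the ratio within the firm's own industry grouping.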
The prediction of financial failure has received extensive coverage in the accounting and finance literature. Since Altman’s (1968) study, the use of financial ratios in distress prediction has become standard. Ball and Foster (1982) provide an extensive survey of the work to that date. Subsequently, additional procedures such as logit and probit analysis were applied to financial ratios in attempts to enhance the predictive accuracy of models. Gentry et al. (1985) use probit- and logit-based cash-flow models, achieving a slight improvement upon basic ratio-based results. Shah and Murtaza (2000) successfully apply a neural-based clustering model to predict bankruptcy. They comment, however, that “it must be acknowledged that the results are limited to the computer and software industry, and a particular set of financial ratios” (p. 84). In New Zealand the evidence suggests there continues to be a high rate of SME failure (Statistics NZ, 1997). The Ministry of Commerce (1999, p. 12) reports that, “Of all small businesses started up in 1995, 71 percent survived the first year, 56 percent survived the second year and 47 percent survived the third year into 1998.” Financial statement analysis is central to business valuation and business lending decisions (Foster 1978; Palepu et al. 1997; Belkaoui 1999). The relationship between accounting variables and the value of businesses has been a recurring theme in the literature. Several studies have utilised share price information and applied earnings response coefficient methods. Hodgson and Stevenson-Clarke (2000) suggest this form of research “demonstrates that earnings have valuation and market information content” (p. 37). They conclude from their investigation that the “results indicate that the relationship between share prices, accounting earnings and cashflows is influenced by firm financial leverage, and demonstrates the importance of placing valuations in a contextual balance sheet setting” (p. 59).
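The ratio-based distress prediction tradition that begins with Altman (1968) can be illustrated with his original Z-score for publicly traded manufacturers, which is a fixed linear combination of five ratios. The coefficients below are Altman's published ones; the input figures are for a hypothetical firm:

```python
def altman_z(wc, re, ebit, mve, sales, total_assets, total_liabilities):
    """Altman's (1968) Z-score for public manufacturing firms.
    Conventionally, Z < 1.81 signals distress and Z > 2.99 signals safety."""
    ta = total_assets
    return (1.2 * wc / ta          # working capital / total assets
            + 1.4 * re / ta        # retained earnings / total assets
            + 3.3 * ebit / ta      # EBIT / total assets
            + 0.6 * mve / total_liabilities  # market equity / liabilities
            + 1.0 * sales / ta)    # asset turnover

# Hypothetical firm (amounts in $m), healthy on most dimensions.
z = altman_z(wc=120, re=300, ebit=150, mve=900,
             sales=1_000, total_assets=1_000, total_liabilities=400)
print(round(z, 2))  # 3.41, above the 2.99 "safe" threshold
```

Logit and probit approaches such as Gentry et al. (1985) replace this fixed linear score with an estimated probability model over similar ratio inputs, which is why they can yield a modest improvement in predictive accuracy.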
An Empirical Study About the Use of the ABC/ABM Models by some of the Fortune 500 Largest Industrial Corporations in the USA
Dr. Raj Kiani, California State University, Northridge, Northridge, CA
Dr. M. Sangeladji, California State University, Northridge, Northridge, CA
Activity-Based Costing and Activity-Based Management models have received considerable attention from both academicians and practitioners in the past 20 years. However, the attributes and shortcomings of the models have not yet been adequately tested. The main purposes of this research were (1) to determine the level of usefulness of the models in practice, (2) to understand the attributes and obstacles of the models in the real-world environment, and (3) to recommend whether the concepts should be included in the business school curriculum. To accomplish these objectives, a survey questionnaire comprising 21 questions was sent to controllers/managers of the Fortune 500 largest industrial corporations in the U.S.A. Of the 500 questionnaires, 85 were found usable for analysis. Of the 85 companies, 41 did not use ABC/ABM models in their operations and the 44 remaining companies had used the models at various levels. Analysis of the responses received from the 44 companies revealed useful information about benefits, attributes, and obstacles regarding the adoption and implementation of the ABC/ABM models. The major obstacles, among others, included inadequacy of management support, unwillingness of people to change, shortage of competent personnel, and complexity in process design. The conclusions reached in this research were (1) since 44 of the 85 participating companies (over 51%) had used the models, colleges and universities should continue to include these models in their curricula, and (2) because only 8 of the 44 companies had used the models for over 5 years, further empirical research and studies are needed to evaluate the degree of usefulness of the models after an extended application. "Activity Accounting" was first introduced and used by the Tennessee Valley Authority in the 1940s.
The concept, with some modifications and improvements, has been presented in various accounting and management texts and literature as "Activity-Based Costing" (ABC) and "Activity-Based Management" (ABM). A review of managerial and cost accounting literature indicates that these tools have received considerable attention from academicians as well as practitioners in the past two decades. As stated by one author, “never in the history of accounting has an idea such as activity accounting moved so quickly from concept to implementation.” Advocates of ABC and/or ABM models argue that these tools assist management to better understand product costing, identify business opportunities, develop performance measurements, and above all improve a company's profitability. Others believe that the models are not that exceptional, and consequently, "not the panacea for all ills." Despite all the theoretical support, attention, and criticisms directed to the ABC and ABM models, there is still a need for more empirical studies to understand the real and practical attributes, benefits, and shortcomings of the models. It is especially important to understand the type of problems and obstacles encountered by those who have used or attempted to use the models and to search for possible solutions to those obstacles. The main purposes of this study are to understand the practical attributes and obstacles of the ABC and ABM models in the real-world environment and to determine whether those concepts, which are currently taught in Cost/Managerial courses, are being used in practice. To accomplish these objectives, a survey questionnaire comprising 21 questions was sent to controllers/managers of the Fortune 500 largest industrial corporations.
It was expected that the primary operation, method of operation, number of employees, volume of sales, frequency of modifying/enhancing products, and type of cost allocation applied by companies could have some impact on the use and the degree of usefulness of the ABC/ABM models. To determine the nature and degree of such impacts, the following issues were included in the survey questionnaire: the company’s primary operation; the company’s primary method of operation; the company’s number of employees; the company’s sales volume in dollars; the company’s frequency of modifying/enhancing or redesigning products; and the type of cost-allocation methods used by the company for assigning costs to products and services. To understand the extent and the level of use of ABC/ABM by various companies; to appreciate the experiences, opportunities, and obstacles encountered by the companies in applying ABC/ABM; and to solicit recommendations from those companies that had tried ABC/ABM in their operations, the following concerns were included in the survey questionnaire: the extent and level of use of ABC/ABM by the company at present; the reasons for not implementing the ABC/ABM models; the length of time ABC/ABM has been used by the company; the individuals or departments who made the decision to adopt ABC/ABM for the company; the depth of ABC/ABM implementation in the company; the benefits that had been realized by applying ABC/ABM; the factors that had discouraged or hampered the company in applying ABC/ABM; the extent of groups’ involvement in developing and implementing ABC/ABM; the extent of groups’ success in applying ABC/ABM; the costs of developing and implementing ABC/ABM; and the company’s position on recommending ABC/ABM to others. To obtain information about our respondents, the following questions were also included in the survey: Who completed the survey, and what was their job title?
What is the functional area of the respondent? For any survey, the most important issues are the selection of the survey population and the sample size. The population and the sample size for this research were both defined as the 500 presidents, controllers, or managers of the Fortune 500 largest industrial corporations in the United States as of October 1999. This population and broad sample size were used in order to elicit feedback from presidents, controllers and/or managers of corporations financially and operationally capable of using ABC and/or ABM in their companies. On October 19, 1999, each president of the Fortune 500 largest industrial corporations was sent a survey questionnaire along with a cover letter explaining the research project and assuring confidentiality. A 20-day period was allowed for the presidents to reply. Then a follow-up letter with another copy of the survey questionnaire and a stamped return envelope was mailed to each non-respondent. Replies received up to April 21, 2000 were included in the final analysis of the research. The first and second mailings resulted in 108 replies (21.6 percent of the 500 total). Of the 108 replies, 85 were considered usable and were included in the final data analysis. The twenty-three non-usable replies contained incomplete data and were omitted from the final data analysis. SPSS 10 software was used to statistically analyze the data. The descriptive outcomes are presented next.
Competing Size Theories and Audit Lag: Evidence from Mutual Fund Audits
Dr. Charles P. Cullinan, Bryant College, Smithfield, RI
The existing audit lag literature identifies three theories for why client size may affect audit lag: (1) that larger clients have shorter audit lags because they can prepare their financial statements more quickly (the client preparation theory), (2) that larger clients have shorter lags because auditors are more willing to complete the audit quickly to retain larger clients (the client service theory), and (3) that larger clients have more transactions to audit, resulting in longer audit delays (the transactions theory). Mutual funds are required to prepare financial statements daily, eliminating any delay caused by the client’s financial statement preparation time. There are also measures of fund transactions that are separate from traditional measures of client size, allowing for a discrete examination of the client service and transactions theories. Results of this study provide mixed support for the client service theory in finding that fund assets were negatively related to audit lag, while other measures of the potential incentives for client service were not significantly related to audit lags. Results of the study also provide mixed support for the transactions theory. Research on the length of time between the end of a client’s fiscal year and the audit report date (audit lag) has produced competing theories for the effect of client size on audit lag. Some theories suggest that larger clients will have shorter audit lags because larger clients have better control systems which enable them to prepare their financial statements more quickly. Larger clients are also theorized to have shorter audit lags because they may have priority within accounting firms in competing for limited audit resources. An alternative theory suggests that larger clients will have longer audit lags because they have a greater number of transactions, which will take the auditor a longer time to examine.
Due mainly to data limitations, the existing audit lag literature has not developed empirical measures which clearly distinguish between these theories. The mutual fund audit market, however, has a number of characteristics which permit testing of these size hypotheses. Much audit lag literature has been based on samples which included firms from many different industries (e.g., Newton and Ashton 1989; Schwartz and Soo 1996; Knechell and Payne 2001). Newton and Ashton (1989) suggest that “... a more refined analysis of audit delay for particular industries could be informative.” By focusing on one particular industry, a richer understanding of the components of audit delay can be developed. The objective of this paper is to develop and test a model of audit lag for mutual fund audits incorporating measures of differing size theories and industry audit characteristics. The remainder of this paper is organized as follows. The next section discusses why the mutual fund market is appropriate for an audit lag study. This is followed by the development of an audit lag model in the mutual fund industry. This model development section is followed by a discussion of the research methods used to test the model, and the results of model testing. The paper closes with a discussion and limitations of the results, and conclusions. Due to the unique nature of mutual funds, the audit lag of mutual funds does not include the time required by the client to prepare the financial statements. The audit lag measure used in most previous studies is the elapsed time between the end of the client’s fiscal year end and the date of the audit report. As shown in the time line in Figure One, this time measure includes three components. The first component of audit lag is the time required by the client to close the books and prepare the financial statements. 
This component is called “client preparation time.” The next component of audit lag is the time elapsed between the date the financial statements are ready and the commencement of the year-end audit (henceforth called the “pause” portion of audit delay). This pause component is similar to Bamber et al.’s (1993, 4) notion of the “Incentives for Timely Reporting,” and Henderson and Kaplan’s (2000, 164) “Incentives for Timeliness.” The last audit lag component is the time required to conduct the audit’s year-end testing, called auditor completion time. This component is similar to the “Extent of Audit Work Required” (Bamber et al. 1993, 4; Henderson and Kaplan 2000, 164) component in existing audit lag models. Figure One is not intended to suggest the length of any of these components, nor that the components cannot overlap. For example, some audit testing could be performed while the client was still working to prepare the financial statements. Many previous studies have recognized these components in an implicit manner. For example, Simnett et al. (1995) recognized the distinction between the time required by the client to prepare the financial statements and the time required by the auditor to audit the financial statements. In their discussion of the effects of client size on audit delay, they note that larger clients are likely to have better internal control, which will allow “...faster preparation of the financial statements to be audited” (Simnett et al. 1995, 4). Carslaw and Kaplan (1991) imply a recognition of the pause component of audit delay in their discussion of the idea that auditors will “... start...the audit in a timely fashion” for their larger clients (Carslaw and Kaplan 1991, 23). Similar assertions suggesting a three-component model of audit delay are found throughout the audit lag literature. In most previous studies of audit report lag, the client preparation time was not clearly measurable.
If the client preparation time varies across clients, it is difficult to know how this component of audit lag may have affected the audit lag. As such, it is usually not possible to determine whether the size variables found to affect audit lag influence the time required by the client to prepare the financial statements, the delay in starting the audit once the financial statements are ready, or the time necessary to conduct the audit or some combination of these factors. Open-ended mutual funds are required to close their books and calculate financial statements every business day.
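The three-component decomposition described above can be sketched in a few lines. The dates below are hypothetical, chosen only to show how total audit lag splits into client preparation, pause, and auditor completion when the components do not overlap:

```python
from datetime import date

# Hypothetical dates for one engagement, following the decomposition
# in the text: preparation, pause, then year-end audit completion.
fiscal_year_end   = date(1999, 12, 31)
statements_ready  = date(2000, 1, 20)   # client preparation ends
fieldwork_begins  = date(2000, 2, 1)    # year-end audit work starts
audit_report_date = date(2000, 3, 10)

client_preparation = (statements_ready - fiscal_year_end).days
pause              = (fieldwork_begins - statements_ready).days
auditor_completion = (audit_report_date - fieldwork_begins).days
audit_lag          = (audit_report_date - fiscal_year_end).days

# With non-overlapping components, total lag is their sum.
assert audit_lag == client_preparation + pause + auditor_completion
print(client_preparation, pause, auditor_completion, audit_lag)
```

For an open-ended mutual fund, which must close its books every business day, `client_preparation` is effectively zero, which is why any size effect on a fund's audit lag can be attributed to the pause and completion components rather than to preparation time.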
An Empirical Analysis of Where Firms Choose to Emit and Corresponding Firm Performance
Dr. Jeffrey L. Decker, University of Hawaii at Hilo, Hilo, HI
Dr. Terrance Jalbert, University of Hawaii at Hilo, Hilo, HI
This paper explores the firm-level, state and federal characteristics that explain pollution emissions during 1988-1996. Differences in pollution approach between different types of firms and the states in which they operate provide a unique research setting to investigate how firms respond to differing levels of state environmental regulation, what effect a change in regime at the federal level has on firm pollution control, how firms with favorable environmental reputations compare to firms with unfavorable environmental reputations, and what firm characteristics are related to environmental performance. The results indicate that government regulation influences where firms choose to emit. The results further indicate that firms that emit more of their emissions in pro-industry states have organizational slack available to meet the increase in federal environmental regulations. Moreover, firms with favorable environmental reputations did not reduce emissions significantly more than firms with unfavorable environmental reputations. The impact of environmental concerns on U.S. businesses has grown dramatically over the past 20 years. Firms have responded to changes in environmental regulations and changes in public opinion in various ways. Some firms take a proactive approach and develop positive environmental reputations. Others take a reactive approach and are seen as having unfavorable environmental reputations. These differences in pollution approach between different types of firms provide a unique research setting to investigate how firms with favorable environmental reputations compare to firms with unfavorable environmental reputations regarding emissions, what firm characteristics are related to environmental performance, and where firms choose to emit. This study is the first to use emissions information from a non-financial source to analyze differences in firms’ responses to changes in the regulatory environment and where firms choose to emit.
Managers and firms are driven by economic incentives. As concern for the environment grows, environmental issues have become part of the company’s economic decision-making process. Firms that seek to develop a green strategy do so because they benefit from being green. These benefits can stem from reputational effects, reduced operating costs, reduced regulatory penalties or other effects. The reputational effects of being green can increase sales to consumers who make choices based on a firm’s environmental policy/products. The use of reputation as a part of a company’s strategic plan is not new. Firms have long sought an image that would reflect well upon their operations. Many firms have spent years developing quality- or value-based reputations that would attract customers and investors to their companies. Despite their original strategic focus, companies frequently find that they cannot maintain their overall favorable image without also addressing the environmental impact of their operations. If positive economic rents are being obtained by firms based on their positive environmental reputations, these will provide incentives for other firms to lessen their impact on the environment. Russo and Fouts (1997) analyze environmental performance in high-growth firms, finding that “green” firms have higher economic performance. They compare firms that rely on short-term, end-of-pipe pollution control (non-green) to firms that have a strategy of focusing on pollution prevention (green). They find that environmental reputation, as measured by environmental ranking, is related to firm profitability. This study builds on Russo and Fouts (1997) by using a different sample of firms labeled green and non-green to examine profitability differences; it also compares the relationship of economic performance across different regulatory regimes and across the states in which the firms emit.
The introduction of the Toxic Release Inventory (TRI) database in 1987 provides new pollution data to researchers. Jaggie and Freedman (1992) examine pollution performance and its effects on economic and market performance in the pulp and paper industry. They find that green firms are less profitable and have lower stock prices than non-green firms. The Jaggie and Freedman finding raises the question of why firms would choose to be green. Limitations of the Jaggie and Freedman study include the analysis of only one industry, the use of only water pollution data and the reporting biases of firms filing with the EPA. In this study, total emissions are examined from a sample of firms in various manufacturing industries. The reporting bias may still be present since these firms self-report emissions data. However, EPA officials, state officials and others audit facilities to check compliance with the reported TRI data, thus increasing the incentives for all firms to report data as accurately as possible. In addition, this study lengthens the time frame of the analysis by using nine years of data from the TRI database (1988-1996), which allows for improved analysis over time. Numerous studies reveal a relationship between firm behavior and governmental regulations (Petroni and Shackelford 1998; Patten and Nance 1998). However, these studies do not analyze firms’ behavior based on where they pollute. Nevertheless, states have different environmental regulations, and an analysis of where firms choose to emit will provide important information for firm managers and for policymakers. The extent to which environmental regulations have been implemented and reauthorized has changed with Presidential Administrations. During the Reagan and Bush Administrations, political changes slowed the reauthorization of many environmental statutes. The Bush Administration stated two main objectives for the environment:
American Business Objective: An Alternative Approach
Dr. Ioannis N. Kallianiotis, University of Scranton, Scranton, PA
This mostly general, theoretical, and philosophical work tries to point out some of the existing problems in our corporations (American businesses) and to suggest a few long-term remedies. The social objective of the corporate firm is emphasized, subject to moral, ethical, and just social constraints. A preliminary treatment of business valuation, risk, and return is also undertaken. Finally, corporate governance, control, and regulation are discussed at length, and a few suggestions, using an alternative approach, are given. Humanity has gone a long way, in its known history of about seven thousand years, to learn, improve, value, and reach its ultimate objective. It has accumulated physical and human capital; it has established businesses that contribute to the production of goods and services, to job creation, to wealth generation, and ultimately to social welfare maximization. It has improved its intellect through observations, experiments, knowledge, philosophy, schooling, exchanges, and revelations. It has created a value system with which it measures assets, wealth, behavior, factors, risk, time, and everything valuable that constitutes our cosmos (=ornament). It has established governments, institutions, laws, and regulations, and moral, ethical, and just systems, which oversee, direct, and administer all this valuable creation, whose only objective is to perfect all human beings. Since the early 1960s, however, this harmonious, just, value-oriented, and advanced world has shown signs of deterioration, revaluation, corruption, demoralization, and loss of control. Before it is too late, we must correct as many of these wrongdoings as we can, starting with the American corporate firm. We experienced (and we felt the pain)(1) a decade of speculation in our financial markets, societal euphoria, and demotion of every real value.
Investors were bedazzled by the promise of new technology,(2) but imagination, science fiction, reality, and expectations are different things. A sharp correction became inevitable, accentuated by internal imperfections, such as slack accounting standards,(3) abuse of stock options, huge borrowings by CEOs, corruption, insider trading,(4) mergers, restructuring,(5) soft money to politicians,(6) offshore subsidiaries to avoid taxes,(7) bribing of foreign governments,(8) and so on, and by external ones, such as lack of regulation. Besides, new data painted a darker picture of the U.S. economy and the global one, especially for South American countries. The Commerce Department made extensive revisions to previous years’ data, most notably indicating that the 2001 recession was longer and deeper, with the economy shrinking in each of the four quarters of 2001 and the first two of 2002.(9) Commercial construction was slumping, and state and local governments were tightening their belts. Many industries, such as insurance, transportation, and tourism, could face huge problems in the near future due to the current and foreseen disorder, and they need major restructuring and revision. The recent and current problems are unique, exceeding our expectations, but hopefully not out of control. We also need new ways to assess values and estimate an asset’s value. World stability is declining every day, and it seems we do not have the knowledge to do something in this area. There have been political problems that affect the authority of our democracy, like the impeachment of President Clinton and the first disputed election in a century; national security issues, like the first attack on the U.S. mainland since the War of 1812; the first recession in about ten years; and corporate scandals that question the entire social-economic-political system.
All these problems have increased uncertainty and have added another serious cost to our already costly lives: they have increased insurance costs, at the very least. Rates started to go up last year following the end of a decade-long price war that drove down prices, and they soared in the aftermath of the September 11, 2001 terrorist attacks, which are expected to cost insurers $50 billion.(10) Companies and individuals are betting today that it will cost them less to pay some claims out of their own pockets than to pay these outrageous premia. The bombing of Yugoslavia, the war in Afghanistan, the Middle East crisis, the second war with Iraq, which now appears inevitable (after all this pressure), and the North Korean crisis have increased the risk for Americans and anti-Americanism around the world, and they may have a damaging effect on America’s leadership. Lately, the American public’s confidence in our “value free”-market system of wealth creation (and maximization) and risk diversification (and minimization) has been severely damaged. Trillions of dollars of market value of securities have been lost, and risks in all sectors and aspects of our lives have grown. Investors are hesitant to place their hard-earned dollars in shares of U.S. corporations after so many revealed scandals. They cannot trust them anymore. The episodes from Enron to WorldCom, Adelphia, Qwest, Global Crossing, Tyco, ImClone, and Arthur Andersen made all too clear that the free market must be less free, more controlled, and much more value-oriented, because as it stands it is disrupting the lives of millions of investors, employees, and citizens in general. We need an alternative approach for our businesses. Richard Grasso, chairman and CEO of the New York Stock Exchange, tries to build trust in our troubled financial markets (11) with the following statement: “We are in the midst of great changes.
We’re constantly studying the changes and the impact on all market participants, and we’re committed to taking steps necessary to prevent further failures of trust. Our primary goal is to do what’s best for our least-sophisticated investor. We believe that when we do what’s right for the individual investor, we’re doing what’s right for our market and, ultimately, the nation’s economic well-being.”(12) The stock-market plunge will affect the real sector of the economy (the wealth effect) and may derail the expected recovery. Also, the volatility of the stock market has increased the uncertainty and, consequently, the risk in our markets. The paper contains six sections. Section 1 introduces the problems that American businesses have had lately. Section 2 presents an alternative corporate firm objective. Section 3 provides a basic valuation of our businesses and market indexes. Section 4 discusses business risk, cost, and return. Section 5 considers the serious issue of business governance and control. Lastly, section 6 offers some concluding remarks.
Education and Tax Policies for Economic Development: Taiwan’s Model for Developing Nations
Dr. Raymond S. Chen, CPA, California State University Northridge, Northridge, CA
Dr. James S. H. Chiu, CPA, CMA, California State University Northridge, Northridge, CA
Economic development in Taiwan over the past forty years has been truly miraculous. Since 1960, per capita gross national product increased from US$154 to over US$14,216, an increase of more than 92 times. Worldwide, Taiwan has become the nineteenth largest economy and the fifteenth largest trading economy. Although many factors contributed to the economic growth in Taiwan, this paper identifies the most significant policies that helped propel Taiwan’s prosperity. These policies consist of both educational policies that fostered human resource development and tax policies that further stimulated economic development. In addition to what has already been done by the government of Taiwan, this paper also addresses other governmental strategies to further promote economic and social development. Many developing nations have devoted significant effort to formulating strategies to foster their countries’ economic development. These developing nations usually look to highly developed nations, such as the United States of America, for reference in modeling their strategies. This paper presents Taiwan for consideration by developing nations as an example of strategic planning for economic development. Taiwan is a mountainous island, with the Central Mountain Range running from north to south and providing a major watershed between east and west. The mountain range occupies more than half of the island. Scores of peaks rise above 10,000 feet, with the highest being 13,113 feet. Around the mountainous area are numerous independent hills, with an average height of 5,000 feet. Taiwan, including the offshore Penghu islands, covers an area of 35,981 square kilometers, or 13,892 square miles. Rivers in Taiwan are wide but short. They are mostly shallow or dried up in the dry season, while there are floods in the rain-bearing wind season.
Soil of alluvial origin on the plains and in the valleys covers about one-fourth of the island and is its chief resource. The upland soils, subject to drastic erosion, are acid and infertile. There are limited mineral resources. With its many mountains, Taiwan has abundant timber. However, given the low quality of the timber, its inaccessibility, and high costs of production, importing lumber has become necessary. Fifty years ago, Taiwan was basically a rural and insulated society, as the modernization that had occurred under Japanese rule had been lost during World War II. When the Nationalist government of the Republic of China, led by Chiang Kai-shek, moved its seat to Taiwan in 1949 at the time of the Communist takeover of mainland China, economic development in Taiwan was at a virtual standstill due to civil war. With the migration of many Chinese mainlanders to Taiwan in 1949, its population grew rapidly, to over 20 million in 1989. Currently, Taiwan’s population has risen to approximately 22 million people. The distribution of the population is influenced by the island’s terrain, as the coastal plains and basins in the west are agriculturally cultivated areas where the population is dense due to transportation and industrial development. The population density in this land reaches over 2,500 per square kilometer, one of the highest in the world. Therefore, economic development in Taiwan depended on sound governmental policies that focused on the development of its most precious resource: its people. Ultimately, education is the most important factor in the development of these human resources. The educational system in Taiwan is divided into three stages: primary, secondary, and higher education. Primary education consists of six-year elementary and three-year junior high school, for a total program of nine years of free education.
Under the current educational code, the government establishes an elementary school in a village if there are more than eight school-aged children in that village. In 1998, the percentage of school-aged children enrolled in elementary schools reached 99.94 percent, and 99.60 percent of these elementary school graduates enrolled in junior high schools. Furthermore, 93.94 percent of junior high graduates continued on to their secondary education. Secondary education consists of three-year senior high or three-year senior vocational schools. Senior high schools focus on general education and are designed to prepare students for enrollment in colleges and universities. Senior vocational high schools provide students with practical knowledge, technical skills, and work ethics in order to meet the national needs for trained trade and industry personnel. Each vocational high school has its specialized field of study, such as agriculture, engineering, business, or home economics. Some vocational high schools were restructured to become junior colleges combining three-year senior high and two-year junior college, a total five-year program. Although there are private high schools in Taiwan, the high schools are predominately public. In 1998, approximately 67.43 and 24.74 percent of graduates from senior high and vocational schools, respectively, were enrolled in colleges and universities. In 1998, there were 137 colleges and universities in Taiwan with 915,920 students, of whom 53,870 were graduate students. All students interested in studying in colleges and universities, including medical and law schools, are required to take the national college entrance examination, which is administered once a year.
Analyzing the Regime-Switching Behaviors of Exchange Rates and a New Test for PPP: An Empirical Study of the Exchange Rates of Two Major Industrial and Four Asian Developing Country Currencies
Dr. Ming-Yuan Leon Li, National Chi Nan University, Taiwan
This study adopts Hamilton’s Markov-switching model (hereafter, MS model) to examine and compare the regime-switching behaviors of the exchange rates of two major industrial country currencies, the GBP and JPY, and four Asian developing country currencies, the NTD, KOW, THB, and PHP. A further innovation of this paper is to establish a specification that incorporates the MS model and purchasing power parity (hereafter, PPP) and to examine whether PPP has marginal predictive power for exchange rate returns after accounting for state-dependent switching. Our empirical findings are consistent with the following notions. First, regime-switching behavior is much more (less) statistically significant in the exchange rates of developing (industrial) country currencies. Second, the findings provide evidence that PPP is pronounced in the high- (low-) volatility regime of industrial (developing) country currencies. Third, the switching-in-variance model with asymmetric PPP established in this paper outperforms the competing models in statistical and forecasting performance. Hamilton (1988) extended the models of Goldfeld and Quandt (1973) to establish Markov-switching techniques as a setting for describing financial and macroeconomic time series data. The feature of Hamilton’s (1988) approach is that the parameters depend on a discrete-state Markov process, and we can use them to analyze occasional and discrete shifts in return variables. Many prior studies have adopted Markov-switching models to analyze the influence of economic and political events on macroeconomic and financial variables. By examining exchange rate returns, especially after the 1970s, one can easily find that volatility is substantially greater during some periods. Thus, we believe that one should not treat the variance over the whole sample period as a constant.
Moreover, assigning dummy variables to partition the sample period into various phases to control for different volatility levels requires a subjective designation of cutoff dates and cannot effectively predict the timing of structural changes. Consequently, to capture how the exchange market behaves, we may gain from a model that can partition different regimes on its own and identify the timing of structural change based on historical data. Following this line of thought, many prior studies adopt Hamilton’s (1988) Markov-switching models to analyze exchange rate returns. These studies include Engel and Hamilton (1990), Bekaert and Hodrick (1993), Engel (1994), Engel and Hakkio (1996), as well as Nicolas, Stephen and Robert (2000). In contrast with the prior studies, which almost always analyze industrial country currencies, one of the features of our paper is that we are more interested in examining developing country currencies and comparing the differences between the two types of currencies. One of our concerns is that most industrial countries adopt floating exchange rate systems, whereas most developing countries adopt fixed exchange rate systems. Would this make regime-switching behavior in industrial country currencies more significant than in developing ones? The other issue of this study is to examine the evidence of PPP (purchasing power parity) in exchange rate returns. PPP is one of the most important theories for determining exchange rates. PPP was also a very reliable guide to the relationship between exchange rates and national price levels in the 1960s. Unfortunately, however, many prior studies, including Krugman (1978), Genberg (1978) and Frenkel (1981), demonstrate that PPP has not held up well since the early 1970s, when many exchange rates began to be more market-determined.
Moreover, these prior studies also show that exchange rates were fixed within narrow, internationally agreed margins through the intervention of central banks in the foreign exchange market, but often deviated from PPP after the early 1970s. Drawing the above discussion together, one can easily see that recent exchange rate returns are much more volatile and deviate from PPP, and many prior studies also demonstrate that there are obvious regime-switching behaviors in exchange rate returns. In this study, therefore, we want to address and examine the following questions. First, can we find stronger evidence of PPP in exchange rate returns during the more volatile or the less volatile periods? Moreover, does PPP have marginal statistical significance and predictive power for exchange rate returns once the state-switching process is taken into account? To answer these questions, this paper sets up an extension of Hamilton’s model to analyze exchange rate returns and to examine whether PPP holds significantly and has marginal predictive power in exchange markets after accounting for regime-switching behaviors. To the best of our knowledge, this is the first study to address and examine these issues for exchange rate returns. In the next section, we examine and compare the regime-switching features of the various currencies. In section 3, we establish a specification that incorporates the MS model and PPP to examine whether PPP has marginal predictive power for exchange rate returns after accounting for state-dependent switching. In section 4, we provide some economic explanations for our empirical results. Section 5 concludes this study. In this paper, we adopt six monthly US dollar exchange rates, including two major industrial country currencies, the British pound (GBP) and the Japanese yen (JPY), and four Asian developing country currencies: the New Taiwan dollar (NTD), the Korean won (KOW), the Thai baht (THB), and the Philippine peso (PHP).
The data period is from January 1980 to January 2001, for a total of 208 observations. The data source is the Taiwan Economic Journal (TEJ).
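The two-state version of the MS model the paper describes can be pictured with a short simulation: returns are drawn from a state-dependent normal distribution, and the (unobserved) state follows a first-order Markov chain. This is a minimal illustrative sketch only; the transition probabilities, means, and volatilities below are made-up values, not the paper's estimates, and a real application would estimate them by maximum likelihood as in Hamilton (1988).

```python
import random
import statistics

def simulate_ms_returns(n, p00=0.95, p11=0.90,
                        mu=(0.001, -0.002), sigma=(0.01, 0.05), seed=42):
    """Simulate returns from a two-state Markov-switching model.

    State 0 is a low-volatility regime and state 1 a high-volatility one.
    p00 and p11 are the probabilities of remaining in states 0 and 1.
    All parameter values here are illustrative, not estimates.
    """
    rng = random.Random(seed)
    state = 0
    states, returns = [], []
    for _ in range(n):
        # First-order Markov chain: switch regime with probability 1 - p_stay.
        p_stay = p00 if state == 0 else p11
        if rng.random() > p_stay:
            state = 1 - state
        # Draw the return from the state-dependent normal distribution.
        returns.append(rng.gauss(mu[state], sigma[state]))
        states.append(state)
    return states, returns

states, returns = simulate_ms_returns(500)
# Sample variance within each regime: markedly larger in the high-vol state.
var_low = statistics.pvariance([r for s, r in zip(states, returns) if s == 0])
var_high = statistics.pvariance([r for s, r in zip(states, returns) if s == 1])
```

Because the regime is persistent (p00 and p11 near one), volatile and calm periods cluster in time, which is the behavior the paper exploits when letting the variance switch between regimes.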
Marketing Research and Information Systems: The Unholy Separation of the Siamese Twins
Dr. Z. S. Demirdjian, California State University, Long Beach, CA
With the advent of the personal computer, there has been an explosion in information technology. Against the backdrop of increasing demand for literacy in information systems since the 1970s, universities began to offer information systems (IS), also known as management information systems (MIS), courses, usually in separate departments carrying the name IS or MIS. Almost all of the courses offered in the IS department are designed to process, store, and retrieve secondary data. In the age of information, management needs primary data to keep abreast of the constantly changing competitive environment. Myopically, the IS department does not require a course in research methodology to prepare the IS student to generate primary data and to appraise data produced by others. The marketing department, on the other hand, does offer such a course, titled Marketing Research. When a student majoring in IS graduates, he or she lacks the conceptual knowledge and the requisite skills either to conduct a systematic and objective research study to generate information for aid in making business decisions or to evaluate the accuracy of data produced by someone else. The focus of this paper is to demonstrate, by means of a model, that Marketing Research and IS are congenitally joined together like Siamese twins whose unholy separation would shortchange the IS student. Additionally, recommendations are made to correct the shortsightedness and deficiency of the IS curriculum in order to prepare students for the real world, well rounded in dealing with both secondary and primary data management and usage. The environment of business is constantly changing to incorporate new technologies for conducting exchanges more efficiently and strategically in order to obtain differential advantage. With the advent of the personal computer in the 1970s, a revolution has taken place in the landscape of business.
As a result, the demand for literacy in information systems has risen steeply over the last several decades. Virtually every university has established an Information Systems (IS), Management Information Systems (MIS), or Computer Information Systems (CIS) department to fill an ever-expanding demand for graduates with an IS orientation. Several dozen courses are offered in the IS departments of various universities. These courses prepare students mainly to process information as secondary data. When it comes to generating primary data, IS students lack the requisite knowledge and skills in research methodology to conduct a systematic and objective study. Even if students were not required to engage in some sort of research, they would need the same knowledge and skills in research in order to be able to evaluate the accuracy of the data being processed. In the face of a rapidly changing business environment and information technologies, the responsibilities of today’s IS professional extend not only throughout the boundaries of the company but also throughout the entire interconnected network of suppliers, customers, competitors, and other entities located around the world. Stair (1997) maintains that this broad scope offers IS professionals a new challenge: how to help the organization survive in a highly interconnected, highly competitive, international environment. In the information era, they are fast becoming the stewards of business and industry. Since the IS professional has begun to play a pivotal role in the ongoing survival of the organization, he or she should possess skills commensurate with the profession’s newly acquired responsibilities. One such critical empowering competency would be knowledge of research methods.
After a brief introduction to the ever-increasing need for IS-educated and trained graduates, the common foundation of IS in the company is presented to point out a major deficiency in its components, which curtails its viability as a source of timely and accurate information for management decision making; then the role of marketing research is discussed to show how it is destined to remain together with IS; finally, some recommendations are made to improve the IS program in the hope of producing well-rounded IS majors who can meet the challenges of the dynamism of the information age. IS majors have to take a core course in marketing. In Principles of Marketing, these students are exposed to only a single chapter on marketing research and information systems. For all practical purposes, they end up with insufficient background in research methods either to conduct research to produce primary data or to evaluate data produced by someone else. Upon close examination, it was found that the IS department does not offer a course covering research methodology, while the marketing department offers several such courses. For example, Principles of Marketing and Marketing Management each contain a chapter on research. The marketing department, furthermore, offers an entire course in research, titled Marketing Research, at both the undergraduate and graduate levels. According to Jessup and Valacich (2001, p. I-6), “Information Systems are combinations of hardware, software, and telecommunications networks which people build and use to collect, create, and distribute useful data [sic], typically in organizational settings.” IS should distribute information, not data, for decision making, but that is not the issue here. The issue is that the definition clearly states that IS “create ... data.”
Income Squeeze on New Zealand Universities
Dr. W. Guy Scott, Massey University, Wellington, New Zealand
New Zealand universities receive a government grant determined by the number of equivalent full-time students (EFTS) enrolled in programmes of study that qualify for Ministry of Education funding. Over the last twenty years, real Ministry of Education subsidy per EFTS fell by 36%, an annual average reduction of 2.3%, and the number of EFTS per academic staff member grew from 12.5 to 19.0. The proportion of income from the Ministry of Education fell from 73% in 1991 to 46% in 1999, the shortfall being met by higher student fees and revenue from entrepreneurial activity. As a nation, New Zealand spent US$3,192 less per EFTS than did Australia in 1995. If present policy settings are unchanged, the number of students per staff member will continue to increase, and universities may have trouble maintaining the quality of teaching and research. Rising fees payable out of pocket by students will make it more difficult for those from lower socio-economic families to access university education. Failure to address problems of funding, access, and retention of a young skilled workforce will reduce New Zealand’s ability to compete internationally. Key words: education, university, funding, public policy, economics. Historically, the New Zealand Ministry of Education has been the dominant source of income for New Zealand universities. Universities receive a government grant based on the number of equivalent full-time students (EFTS) enrolled in approved programmes of study that qualify for Ministry of Education funding. (The appendix provides information on government funding categories and rates.) Revenue is also derived from fees charged to students and from research grants and entrepreneurial ventures. All state-owned universities now operate under the same business model and attempt to maximise Ministry of Education funding and fees income by competing for students.
State-owned universities have since 1991 been behaving as rival firms, competing intensely for market share and revenue. Real revenue per student derived from the Ministry of Education has fallen. In response, universities have attempted to reduce operating costs by allowing student numbers per academic staff member to rise, and have sought to replace lost revenue by increasing student charges and boosting income from other sources. The following statistics are provided to aid international comparisons. In 1999 New Zealand had seven universities (eight in 2000). Universities ranged in size from 2,594 to 21,869 equivalent full-time students, with a combined total of 89,115 EFTS in 1999. New Zealand had 3.830 million residents in the year ending March 2000, and GDP of NZ$103,857 million (GDP per capita NZ$27,000) in the year ending March 2000. The exchange rate at the end of March 2000 was NZ$1.00 = US$0.50. This paper investigates trends in university revenue per equivalent full-time student (EFTS) between 1980 and 1999 and compares the funding of New Zealand universities with the funding of universities in other OECD countries for which data were available. Ministry of Education funding data, numbers of students, and prices and wage rates were obtained from a range of official sources discussed in the appendix. A university input price index (UPI) was constructed by linking and weighting the most appropriate wage and non-labour cost indices available from Statistics New Zealand. These indices and the method of construction are described in the appendix. The UPI was constructed by calculating a weighted mean of (1) the wages index and (2) the non-labour input cost index. Both of these indices are specific to education. The weights represented the respective proportions of operating expenses accounted for by wages and by other input costs.
Nominal expenditure per EFTS net of goods and services tax (GST) was then deflated by the UPI to derive funding per EFTS in constant 1999 input prices and provide a measure of the volume of inputs funded per EFTS. Information on sources of funding and staff–student ratios was obtained from the New Zealand Vice‑Chancellors' Committee (2000). As GST is a transfer payment (collected by universities and paid to government) and is not a part of university costs, it was removed from the expenditure series. For international comparisons, expenditure per university EFTS and GDP per capita for the 1995 year (both converted into US$ using purchasing power parities) were obtained for 17 OECD countries (OECD, 1998). Real Ministry of Education funding per EFTS fell at an annual average rate of 2.3% between 1980 and 1999. The first half of the series (1980 to 1990) recorded an annual average fall of 1.5%, and the period 1991 to 1999 recorded annual average reductions of 2.8% (chart 1 and table 1). Between 1980 and 1999, while real Ministry of Education funding per EFTS in 1999 NZ$ fell by $3,821 (36%), the EFTS-to-staff ratio increased from 12.5 to 19.0, an increase of 6.5 students per academic staff member, or 52%. The proportion of university operating revenue from the Ministry of Education fell from 73% in 1991 to 46% in 1999. Over this period, revenue from student fees rose from 14% to 23%, and the proportion raised from other sources rose from 13% to 30% (a greater increase than income from fees) (chart 2 and table 2).
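The deflation arithmetic the paper describes, a UPI built as a weighted mean of the wage and non-labour cost indices, then used to restate nominal funding in constant base-year prices, can be sketched as follows. All index values, the wage share, and the funding figure below are hypothetical placeholders, not the paper's data.

```python
def university_price_index(wage_index, nonlabour_index, wage_share):
    # Weighted mean of the two education-specific input cost indices;
    # the weights are the shares of operating expenses on wages vs. other inputs.
    return wage_share * wage_index + (1.0 - wage_share) * nonlabour_index

def real_funding_per_efts(nominal_per_efts, upi, upi_base):
    # Deflate nominal funding per EFTS into constant base-year input prices.
    return nominal_per_efts * upi_base / upi

# Illustrative figures only -- not the paper's data.
upi_1990 = university_price_index(wage_index=60.0, nonlabour_index=70.0,
                                  wage_share=0.7)   # 0.7*60 + 0.3*70 = 63.0
upi_1999 = university_price_index(wage_index=100.0, nonlabour_index=100.0,
                                  wage_share=0.7)   # base year -> 100.0
real_1990 = real_funding_per_efts(8000.0, upi_1990, upi_1999)
```

With these made-up numbers, NZ$8,000 of nominal 1990 funding corresponds to about NZ$12,698 in 1999 input prices, which is the sense in which the paper's series measures the volume of inputs funded per EFTS.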
Characteristics, Performance and Prospects of Emerging Stock Market of Oman: Facts & Figures for International Investors
Dr. Mazhar M. Islam, Sultan Qaboos University, Sultanate of Oman
The emerging stock market of Oman, known as the Muscat Securities Market (MSM), was established in 1989 with 71 joint stock companies and an equity capital of about US$700 million. There was explosive growth of the MSM (to 504 points) in 1997 and in the first half of 1998, due to bank loans, borrowing from trust accounts, the liberalization of income tax laws, record profits earned by companies, a stable oil price, and the government’s plan for economic reforms. However, in September 1998 the MSM index dropped to 280 points, leading to the crash of the market. Market observers attribute this crash mainly to the high speculative activity fueled by a lack of transparency in the market, investors’ inability to separate the market price of a share from its intrinsic value, high rates of interest with low dividends, a sharp decline in the oil price, the government fiscal deficit, the lack of proper monitoring and supervision systems, and the absence of required institutional infrastructure. Although the government’s subsequent reform measures stimulated market transactions for a while, the MSM has not yet recovered from its 1998 crash. It is argued here that the prospects of the market depend on how quickly the government addresses some important issues in order to bring back investors’ confidence. Investors need accurate information from professional analysts regarding companies’ financial strengths and long-term prospects. This research suggests that capital market development in Oman must be based on strong institutional infrastructure, a viable private and public equity market, a strong corporate governance mechanism, timely regulation and supervision, competent management, and sound economic policies. Government must maintain credible fiscal and monetary policies in order to foster stability, competition, and growth.
Financial accounting standards should encourage disclosure of performance and solvency using internationally accepted accounting standards, and the payment and operating systems must function efficiently. The evolution and development of financial and capital markets around the world has followed two distinct paradigms: ‘stock market centric’ versus ‘bank centric’. The ‘stock market centric’ system, followed primarily by the U.S.A. and the U.K., depends on the availability of strong institutional infrastructure in the form of a mix of interdependent financial institutions that mobilise capital by involving themselves in corporate governance issues. Although large banks exist, they play a relatively insignificant role in corporate governance. On the other hand, the ‘bank centric’ capital market system, popular in Germany, Japan, and most of the emerging market economies, is characterised by a significant role for banks and a weaker institutional infrastructure. In this system, banks are primarily responsible for mobilising surplus funds and play a corporate governance role in monitoring and supervising management. Mobilising capital in the ‘stock market centric’ system has occurred through venture capitalists, hedge funds, pension funds, insurance companies, and other non-bank financial institutions, and to a lesser extent through banks. In ‘bank centric’ markets, followed by most emerging market economies, mobilisation of capital has occurred primarily through banks. Since the various institutions in the two paradigms differ considerably in their basic role, objectives, risk tolerance, investment strategy, investment flexibility, regulatory restrictions, and level of investment, the approach to developing a viable securities market will differ according to the type of system in place.
Among others, studies by Bollerslev (1987), Schwert (1989), and Islam (1998) show that the statistical behavior of returns on many high-frequency speculative financial assets is skewed and leptokurtic. Moreover, their studies show that volatility is negatively correlated with past returns. Current research on volatility is usually linked to the notion of market efficiency, which has become increasingly popular in the finance literature since the mid-eighties. Most recently, Moursi (1999) examined the behavior of stock returns and market volatility in Egypt using a GARCH model. His investigation shows that market volatility is considerably affected by past shocks associated with the arrival of news. Moreover, he argues that volatility is also affected by the past conditional prediction. Bekaert and Harvey (1997) argue that high fluctuations in volatility, which increase the uncertainty of capital asset returns and payoff risks, may raise the cost of capital and delay investment decisions, thus leading to lower levels of growth and inflation. Their study also shows that Egyptian stock returns are characterized by high volatility. Claessens and Dasgupta (1994) have focused on analyzing the behavior of returns in emerging stock markets. Their study supports high and positively skewed stock returns in those markets. El-Erian and Kumar's (1995) paper undertakes a comparative analysis of equity markets in six Middle Eastern countries: Egypt, Iran, Jordan, Morocco, Tunisia, and Turkey. The quantitative indicators of their paper identify the principal characteristics and structural features of these six markets. 
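The GARCH framework referred to above reduces to a simple conditional-variance recursion. The sketch below is illustrative only; the parameter values are assumptions for demonstration, not estimates from any of the cited studies.

```python
# Minimal GARCH(1,1) conditional-variance recursion of the kind used in
# the volatility studies cited above. Parameter values are illustrative
# assumptions, not fitted estimates.

def garch_variances(returns, omega, alpha, beta, var0):
    """var_t = omega + alpha * r_{t-1}**2 + beta * var_{t-1}."""
    variances = [var0]
    for r in returns[:-1]:
        variances.append(omega + alpha * r ** 2 + beta * variances[-1])
    return variances

# A large shock in period 1 raises the conditional variance in period 2,
# which then decays geometrically as the shock recedes.
vs = garch_variances([0.05, 0.0, 0.0], omega=1e-5, alpha=0.1, beta=0.85, var0=2e-4)
```

The recursion makes the paper's point concrete: past shocks (the squared-return term) feed directly into current conditional volatility, and persistence is governed by the sum of the two coefficients.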
They have also investigated the informational efficiency of selected markets, thus providing a basis for the subsequent review of policies for enhancing the role of equity markets in the macroeconomy of Middle Eastern countries while minimizing risks. (For information on specific country experiences with stock market development, see El-Erian & Kumar, 1995, and Claessens, 1995; for comparisons of market trading and information systems in developing countries, see Glen, 1995.) Recognising the importance of financial services for economic development, the World Bank in the 1980s began devoting increasing effort toward improving the financial systems of countries to stimulate economic development and to cope with financial crises that threaten economic prosperity. More recently, World Bank programs have stressed the development of capital markets in general and stock markets in particular. However, the process of development of stock markets and their integration with the global capital market is far less advanced in the Arab Gulf Cooperation Council (AGCC) countries. The markets of the six AGCC countries are less active than the relatively more active markets of Egypt, Jordan, and Turkey. The financial sector in AGCC countries is dominated mainly by commercial banks and is thus 'bank centric'.
U.S. Treasury Inflation Protection Bonds
Dr. Malek Lashgari, CFA, University of Hartford, West Hartford, CT
The issuance of U.S. Treasury inflation indexed bonds was a step forward in the pursuit of capital market completeness. With these bonds, investors are able to maintain their real purchasing power in any future state of inflation. These bonds are suitable, and at times superior, alternatives to regular bonds. For example, U.S. Treasury inflation indexed bonds would be an ideal investment in a defined benefit pension plan portfolio in which wages reflect inflation. Some features of these bonds, though, appear to be of concern. A portion of these bonds' cash flows is accrued over time and paid at maturity. However, taxes are due on an annual basis, causing a shortfall in periodic cash flows. Furthermore, the adjustment process to inflation is theoretically incomplete. This incomplete matching to inflation by U.S. Treasury inflation indexed bonds, however, provides an opportunity for introducing a derivative contract that may be used to modify the inflation adjustment process. Inflation indexed bonds were issued by the U.S. Treasury Department in January 1997. These bonds offer compensation for past inflation on the semiannual income and on the final principal value at maturity. Similar to regular bonds, inflation indexed bonds provide cashable coupon interest income every six months. However, unlike regular bonds, these periodic incomes tend to rise over time according to the observed rate of inflation. The principal value of a regular government bond is stated on the face of the bond and will be paid at maturity. For an inflation indexed bond, due to the adjustment for inflation, the added benefit is that its accrued principal value will be higher than its face value. If there is no inflation, or when a deflationary environment exists, U.S. Treasury inflation indexed bonds would pay the stated face value at maturity. 
Consider an inflation indexed bond with a stated coupon interest rate of 3.375 percent and par value of $1000, with an original maturity of 10 years, that was issued at $994.82. If inflation happens to be 2.5 percent during the first year, the year-end principal value will be $1025 (i.e., 1000(1+.025)). The interest income for the first year will be $33.75 (i.e., 1000 * .03375). The total return for the first year will be $58.75 (i.e., 33.75 + 25), of which $33.75 is received during the year and the remaining $25 is accrued and paid at the end of the tenth year. The effective return on investment during the first year is 5.91 percent (i.e., 58.75 / 994.82), producing a real return of 3.41 percent (i.e., 5.91 – 2.5). These results are shown in the first row of Table 1. As for taxes, the entire total return of $58.75 is subject to income taxes during the first year. The coupon interest income received in the second year will be $34.59 (i.e., .03375 * 1025). Assuming an inflation rate of 1.5 percent during the second year, the accrued principal value will rise to $1040.38 (i.e., 1025(1+.015)), and income earned due to inflation will be $15.38 (i.e., 1040.38 – 1025). This will result in an effective total return of $49.97 (i.e., 34.59 + 15.38) and a return on investment of 4.88 percent (i.e., 49.97 / 1025). The effective real rate of return during the second year will be 3.375 percent. Income subject to taxes during the second year amounts to $49.97. For a zero percent inflation rate during the third year, interest income received will be $35.11 (i.e., .03375 * 1040.38) with no adjustment made to the accrued principal, and the return on investment would be 3.375 percent (i.e., 35.11 / 1040.38). If deflation prevails during the fourth year and thereafter, the interest received, the income earned due to adjustments for deflation, and the accrued value of the principal would all decline. Table 1 displays the results. 
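The year-by-year arithmetic above can be traced in a short sketch. The helper function below is hypothetical, written only to reproduce the example's figures; it is not an official Treasury formula.

```python
# Sketch of the inflation indexed bond arithmetic from the example above.
# tips_year is a hypothetical helper, not an official formula.

def tips_year(principal, coupon_rate, inflation):
    """One year of the example: coupon on accrued principal, then inflation adjustment."""
    interest = coupon_rate * principal            # cash coupon received
    new_principal = principal * (1 + inflation)   # inflation-adjusted principal
    accrual = new_principal - principal           # taxable now, paid at maturity
    return interest, accrual, new_principal

# Year 1: 3.375% coupon, 2.5% inflation, purchase price $994.82
i1, a1, p1 = tips_year(1000.0, 0.03375, 0.025)
total1 = i1 + a1                 # $58.75 total return ($33.75 cash + $25 accrual)
effective1 = total1 / 994.82     # about 5.91% nominal return on the purchase price
real1 = effective1 - 0.025       # about 3.41% real return

# Year 2: 1.5% inflation applied to the accrued principal of $1025
i2, a2, p2 = tips_year(p1, 0.03375, 0.015)       # $34.59 coupon, $15.38 accrual
```

Iterating the same helper over further years, with zero or negative inflation, reproduces the remaining rows of Table 1.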
Note that under severe deflation the interest received on this bond would have a floor of $33.75, with a principal value of $1000.00. This example shows that the effective real rate of return would remain the same during the life of the inflation indexed bond at 3.375 percent, which is its stated coupon rate. The first-year effective real rate of return in this example is, however, slightly greater than 3.375 percent, since the bond was bought at a slight discount to par value. As shown in Table 1, income earned due to inflation, reflected as an increase in the principal value during the first year, is $25.00. While this is not received in cash, it is taxable. While investors would pay taxes on $25.00, its present worth is $18.54 (i.e., 25/(1+0.03375)^9). The accrued principal value subject to tax during the second year is $15.38, with a present worth of $11.79 (i.e., 15.38/(1+0.03375)^8). It appears reasonable for taxes to apply to the present worth of the incremental value of the accrued principal instead of its distant future value. Treasury inflation-indexed securities were issued in the U.S. on January 29, 1997 with a term remaining to maturity of 10 years, maturing on January 15, 2007. The par value at the beginning (officially, January 15, 1997) was $1,000, with a stated, fixed coupon interest rate of 3.375 percent. Following a single-price Dutch auction, the bonds were sold for $994.82 with a yield of 3.449 percent. Adjustments for inflation are based on the non-seasonally adjusted U.S. City Average All Items Consumer Price Index for All Urban Consumers (CPI), published monthly by the government of the United States. The coupon interest income is adjusted to inflation with a lag of about two months, due to the complexities in measuring the monthly values of the CPI. 
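The discounting in the tax argument above is ordinary present-value arithmetic. The sketch below assumes, as the text does, that the bond's 3.375 percent coupon rate serves as the discount rate over the years remaining to maturity.

```python
# Present worth of the taxable inflation accruals, discounted at the
# bond's 3.375% rate over the years remaining until the accrual is paid.

def present_worth(amount, rate, years):
    """Discount a future cash amount back over the given number of years."""
    return amount / (1 + rate) ** years

pv1 = present_worth(25.00, 0.03375, 9)   # year-1 accrual, paid 9 years later: ~$18.54
pv2 = present_worth(15.38, 0.03375, 8)   # year-2 accrual, paid 8 years later: ~$11.79
```

The gap between each accrual and its present worth is the economic cost of taxing the accrued principal at its distant future value rather than its present worth.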
Since trading takes place on a daily basis, the Treasury Department constructs a daily index value for the CPI by linear interpolation between past monthly values of the CPI; the daily CPI for May 15, for example, is computed in this way. This linear interpolation helps market participants in appraising the market value of this security. Computations regarding inflation adjustments are based on the published "CPI daily index ratio" as reported by the U.S. Treasury Department. For example, based on the July 15, 1997 CPI daily index ratio of 1.01085 (the base is January 15, 1997, with an index ratio of unity), the amount of the semiannual interest payment during the second half of the first year on the 10-year U.S. Treasury inflation indexed bond would be $17.06 (i.e., .03375 * 1.01085 * $1,000 / 2). For more information, see the Treasury website at http://www.publicdebt.treas.gov, as well as Brynjolfsson and Faillace (1997).
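The payment calculation above can be written as a one-line formula; this sketch simply restates the example's figures.

```python
# Semiannual coupon on the inflation indexed bond, scaled by the
# published CPI daily index ratio (figures from the example above).

def semiannual_payment(par, coupon_rate, index_ratio):
    """Half the annual coupon, applied to the inflation-adjusted principal."""
    return coupon_rate * index_ratio * par / 2

# July 15, 1997: index ratio 1.01085 relative to the January 15, 1997 base
pay = semiannual_payment(1000.0, 0.03375, 1.01085)   # about $17.06
```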
The Effects of Organizational Culture on Conflict Resolution in Marketing
Dr. Sungwoo Jung, State University of New York at Oneonta, Oneonta, NY
Most of the current research on conflict management, especially in marketing, does not consider differences in each party's organizational culture. Resolving conflict between manufacturers and suppliers is important in order to build long-term relationships. Four different types of conflict resolution approaches (Dant and Schul 1992) are hypothesized depending on the organizational cultures (Cameron and Freeman 1991) of manufacturers and suppliers. In spite of the importance of organizational culture, it has not been reflected in scholarly studies (detailed overviews are found in Ashkanasy, Wilderom, and Peterson 2000). This lack of development may be attributed to the relatively greater attention given to consumer than to organizational issues in marketing in general (Ruekert, Walker, and Roering 1985). Therefore, it became necessary to pay attention to organizational culture along with structural explanations for managerial effectiveness (Parasuraman and Deshpande 1984). Weitz, Sujan, and Sujan (1986) is one of the studies to include organizational culture in a model of selling effectiveness. Relatedly, it is believed that effective management control of marketing channels is vital to marketing planning (Lederhaus 1984). Conflict can so undermine the effectiveness of marketing distribution channels as to result in channel termination (Eliashberg and Michie 1984). The issues of conflict in channels of distribution have received much attention in the marketing literature (Gaski 1984). Conflict is one of the concepts that represent the work on the behavioral dimensions of channels of distribution (Hunt, Ray, and Wood 1985). It appears that the nature and sources of the power possessed by a channel entity may affect the presence and level of conflict (as well as other behavioral variables) within the channel (Gaski 1984). It is not difficult to imagine that companies with different organizational cultures take different approaches to resolving conflicts. 
However, there have been few studies on the effect of organizational culture on conflict resolution. An organization's culture can affect not only its performance but also its conflict resolution methods. Previous research, however, has focused on the causes of conflict rather than on conflict resolution. A model of the effect of organizational cultures on conflict resolution is presented in this paper. Organizational culture is regarded as a principal explanatory variable in comparing the functioning of American and Japanese firms (Pascale and Athos 1981). According to them, organizational culture can explain differences in competitive effectiveness, especially when there are few apparent differences in the structural characteristics of the organizations. Deshpande and Webster (1989), after reviewing more than 100 studies in organizational behavior, defined organizational culture as "the pattern of shared values and beliefs that help individuals understand organizational functioning and that provides norms for behavior in the organization" (p. 4). In classifying organizational culture, Cameron and Freeman (1991) used two key dimensions. One axis runs from organic to mechanistic processes: whether the organizational emphasis is more on flexibility, spontaneity, and individuality or on control, stability, and order. The other axis is the relative emphasis on internal maintenance or on external positioning. Internal maintenance includes smoothing activities and integration, whereas external positioning includes competition and environmental differentiation. The four culture types are labeled clan, hierarchy, adhocracy, and market. A market culture emphasizes competitiveness and goal achievement; transactions are governed by market mechanisms (Ouchi 1980). Organizational effectiveness is measured by productivity achieved through these market mechanisms. A clan type culture focuses on cohesiveness, participation, and teamwork. 
The commitment of organizational members is ensured through participation. Organizational cohesiveness and personal satisfaction are rated more highly than financial and market share objectives (Deshpande, Farley, and Webster 1993). An adhocracy culture emphasizes values of entrepreneurship, creativity, and adaptability (Deshpande, Farley, and Webster 1993). Flexibility and tolerance are important beliefs, and effectiveness is measured in terms of finding new markets and new directions for growth. A hierarchy culture stresses order, rules, and regulations. Transactions are under the control of surveillance, evaluation, and direction. Business effectiveness is defined by consistency and the achievement of clearly stated goals. It is important to note that these culture types are modal or dominant ones rather than mutually exclusive ones (Deshpande, Farley, and Webster 1993). By implication, most firms can and do have elements of several types of cultures, perhaps varying between product groups even within the same strategic business unit (SBU). For example, a company can have adhocracy, market, and clan type cultures even though there is only one dominant culture. The dominant culture emerges over time, and there can be only one. For the effectiveness of organizations, Quinn and Rohrbaugh (1983) proposed a competing values model. Based on an empirical analysis, this model suggested that the clusters of values produced dimensions consistent with past research in other disciplines. A common set of dimensions organizes the factors on both a psychological and an organizational level, thus leading to a model of culture types.
Dr. Sankar Acharya, University of Illinois at Chicago, IL
Capital market gurus plead for universal banking with theoretically sound but practically fragile firewalls around special purpose vehicles (SPVs) like financial conduits and trusts, while regulators continue to noose banks and massive risks pile on taxpayers due to credit derivatives. Fragile firewalls destroyed Enron and MCI-WorldCom and may implode banks. The solution proposed here is to create enough safe banks to serve panic-prone depositors, and to let the rest of the banks operate as universal banks without regulation. Safe banks invest exclusively in government securities, accept no more deposits than the liquidation value of their assets, and issue no liabilities (like debt) except common stock and preferred stock. Why should the government regulate commercial banks, as in the U.S. and most other countries? American commercial banks were not regulated prior to 1933. In principle, a deregulated banking system should operate like any other industry in which companies raise debt and equity capital to fund business operations, return a fixed, pre-set coupon interest to bondholders, and distribute the residual profits to shareholders as dividends. Bondholders and shareholders take risks consistent with expected rates of return on investments. The expected rates of return may differ from promised coupon interest rates on debt or dividend payment rates on common stock. Investors choose how much to invest depending on their expected returns and risk tolerances. Like any other business, banks have stakeholders. A bank's stakeholders include depositors, bondholders, and shareholders who consciously choose investments like those in non-banking businesses. How are banks then different from non-banks? Does the difference naturally lead to regulation of banks? Banks fund their operations by borrowing very liquid demand deposits and other debt maturing in relatively shorter periods of time than the terms of the projects they fund. 
Banks must pay claims from demand depositors whenever such claims are submitted. Banks use some equity funds with indefinite maturity, but they fund real assets (projects) which are highly illiquid. Typical bank assets include home mortgage loans and business loans extended over as long as thirty years. Unless borrowers become delinquent, banks cannot demand repayment of outstanding balances on such loans, making these assets illiquid. To liquidate assets, a bank generally incurs large legal and other transaction costs. To sum up, banks realize returns from assets over longer terms, whereas they must commit repayments to depositors and bondholders over shorter terms. This creates a mismatch between the maturities of bank assets and liabilities, unlike in non-banks. If all depositors and short-term bondholders of a bank withdraw their funds at the same time out of panic, the bank can have serious difficulty in meeting these obligations and may even fail due to lack of sufficient funds. Panic at the level of one bank may spread to other banks, causing a run on bank deposits and a systemic collapse of the banking system, as happened in the U.S. in the early nineteen-thirties. In many instances, banking panics may be irrational. But once a run spreads over the entire banking system, there may be serious repercussions of credit squeeze and depression in the economy. To contain the irrational fear, the U.S. Congress instituted a system of government guarantees for bank deposits. While a government guarantee of bank deposits circumvents irrational banking panics and runs, it can engender moral hazard in the banking industry. Once insured, depositors simply relax and stop monitoring banks, as the government stands by to pay them off should their bank fail. Moral hazard means that banks can take government-guaranteed deposit funds and invest them in highly risky bets. 
Although bank shareholders can lose their equity if such bets do not turn out favorably, leverage lets them benefit enormously when the bets succeed. For example, suppose that a bank has $10 in equity funds, $90 in demand deposits, and no other stakes. Then the leverage (equity-to-debt) ratio is 1:9. This is a relatively high degree of leverage compared to typical non-banking firms with leverage ratios of about 1:1. Banks generally operate with high degrees of leverage. If the entire $100 is invested in loans earning a 6% rate of interest annually and depositors are paid a 2% rate of interest annually, the bank makes $6 from loans and pays $1.80 to depositors per year, earning a net $4.20 from operations, which is a 42% rate of return on equity of $10. Leverage thus magnifies the profits of bank shareholders. This tempts bank managers, who normally act in the best interest of shareholders, to take risk. If the bank's borrowers default and pay only part of the 6%, shareholders may lose some of their capital. The most that shareholders can lose is $10, which is relatively small when compared to the potential loss of $90 to taxpayers due to the government guarantee of bank deposits. The deposit guarantee solves the problem of banking panics, but creates the new problem of moral hazard, by which the government (and hence taxpayers) remains liable for unfavorable bank bets. The U.S. incurred hundreds of billions of dollars of losses during the late nineteen-eighties in rescuing many savings and loan associations, called thrift banks. To recover such losses in the future and to prevent moral hazard, the U.S. has instituted a system of risk-based deposit insurance and minimum bank capital standards. Banks are required to pay a certain percentage of their outstanding insured deposits as a price for the deposit guarantee, and this price varies with the level of risk of a bank. The greater the risk of a bank's assets, the larger the deposit insurance premium rate. 
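The leverage arithmetic in the example above can be laid out in a few lines; the figures are the example's own ($10 equity, $90 deposits, loans at 6%, deposits at 2%).

```python
# Leverage arithmetic for the example bank: $10 equity, $90 deposits,
# the full $100 lent at 6% while deposits cost 2%.

equity = 10.0
deposits = 90.0
loan_income = 0.06 * (equity + deposits)   # $6.00 earned on loans
deposit_cost = 0.02 * deposits             # $1.80 paid to depositors
net_profit = loan_income - deposit_cost    # $4.20 net operating profit
roe = net_profit / equity                  # 0.42, i.e. a 42% return on equity
```

The same arithmetic shows the asymmetry behind moral hazard: shareholders' upside scales with the full $100 of assets, while their downside is capped at the $10 of equity.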
Every bank has to maintain a minimum level of capital as a percentage of assets under this scheme in the U.S. Banks failing to meet the minimum capital standards are not allowed to remain in operation. The insurance premiums are deposited in a government-managed deposit insurance fund, which is required by law to hold at least 1.25% of total bank deposits. Bank insurance premium rates are adjusted to maintain this level of funding of the deposit insurance fund. This elaborate system of regulation of the most dominant sector of a capitalistic economy is a vivid illustration of the fact that free credit markets have collapsed in the past and that governments must intervene to stabilize such markets. Observe, however, that the U.S. government has strived to institute only those regulatory policies (optimal capital and deposit insurance premium standards) that are consistent with competitive, rational capital markets. In principle, the government's involvement simply ensures that bank depositors do not resort to irrational panics. In a hypothetical scenario of only rational behavior, markets in an ideally deregulated banking industry would have imposed on banks debt covenants and prices for risky debt. Debt covenants may take the form of requiring a business to maintain a minimum net worth (assets minus liabilities) or, equivalently, a maximum leverage ratio. A business violating such debt covenants may be taken to a bankruptcy court under corporate laws. Imposing a certain price for risky debt means that bondholders demand that businesses taking high risk pay a consistently higher rate of interest.
Impact of a U.S. Living Wage on Food Prices, Obesity, and Healthcare Costs: A Discussion
Dr. Michael F. Williams, University of St. Thomas, Houston, Texas
A national U.S. living wage may reduce aggregate healthcare expenditures in the United States. This connection hinges on the following four propositions: 1. A higher minimum wage will cause higher food prices. 2. Higher food prices will cause reduced food consumption. 3. Reduced food consumption will cause reduced obesity. 4. Reduced obesity will cause reduced healthcare costs. We summarize research by others that supports each of these four propositions. We then provide a preliminary estimate of the reduction in healthcare costs resulting from a living wage of $8.90—a reduction in healthcare costs totaling $1.82 billion. We suggest an avenue of research that may provide a more accurate measure of the size of this reduction. Proponents of a living wage in the United States argue that the minimum wage should be raised sufficiently high so that a worker can support herself and her family at a living standard that exceeds the poverty level.(1) Analysis of the effects of such a living wage (and of a higher minimum wage in general) often centers on its effects on employment (and unemployment); an overlooked possibility is that a living wage may reduce aggregate healthcare expenditures. This possibility arises as a result of these four related propositions: 1. A higher minimum wage will cause higher food prices. 2. Higher food prices will cause reduced food consumption. 3. Reduced food consumption will cause reduced obesity. 4. Reduced obesity will cause reduced healthcare costs. Research by others supports each of these four propositions. This paper summarizes this research, then combines its results to provide a preliminary estimate of the reduction in healthcare costs resulting from a living wage of $8.90 (a level high enough for a single worker to support herself and three family members at a level of income above the official U.S. 
poverty level), and suggests an avenue of research—a computational general equilibrium model—that may provide a more accurate measure of the size of this reduction. U.S. Department of Agriculture (USDA) economist Karen Hamrick (1999) estimated the impact of the minimum wage on employment in the U.S. "food system." The food system includes the following production sectors of the economy: manufacturing of food and kindred products, eating and drinking places, wholesale food trade, and retail food trade. According to Hamrick's estimates, 11% of all food system workers earned at or below the $5.15 minimum wage, and the median hourly wage of food system workers was $7.09 in 1997. These figures show that more than half of all food system workers earned less than a living wage in 1997—that is, earned insufficient income to support a family of four on one salary. Clearly, then, a living wage will have a large impact on production costs in the food system and on product prices. Two other USDA economists, Lee and O'Roark (1999), use an input-output model to derive a numerical relationship between a higher minimum wage and higher food prices. They estimate the following(2): A twelve percent increase in the minimum wage will cause the average price of food eaten at home to rise by 0.40 percent. A twelve percent increase in the minimum wage will cause the average price of food eaten away from home to rise by 1.1 percent. Let us take an average of these two estimates. According to Trivers and McBride (1996), Americans obtain approximately half of their calories from food eaten at home and half from food eaten away from home. 
So, on average(3): A twelve percent increase in the minimum wage will cause the average price of food eaten at or away from home to rise by 0.75% (three-quarters of one percent). Now let us interpolate this result to estimate how much each single percentage point increase in the minimum wage will affect the average price of food eaten at or away from home(4): Each one percentage point increase in the minimum wage will cause the average price of food eaten at or away from home to rise by 0.0625%. Seminal work in this area was done by Tobin (1950), who estimated a demand equation for food in the United States. From this demand equation one can calculate the price elasticity of demand for food—an estimate of the percentage change in food consumption resulting from each percentage point change in food prices. Tobin's work has been discussed by many other researchers (not only for its interesting results but also for its econometric techniques), including Izan (1980) and Song, Liu, and Romilly (1997). Recently, USDA economists Huang and Lin (1999) published their estimates of price elasticities of demand for a wide range of food products, based upon 1987-1988 National Food Consumption Survey data. Huang and Lin include estimates of the price elasticities of demand for food of "low" income households—households earning less than 130 percent of poverty-level wages. These estimates are relevant to our study of the living wage, since most full time workers who earn less than the minimum wage fall in the low income category. Huang and Lin estimated the following(5):
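The averaging and interpolation steps above reduce to a few lines of arithmetic; the half-and-half calorie weighting follows Trivers and McBride as cited, and the per-point figure is the simple linear interpolation described in the text.

```python
# Pass-through of a minimum-wage increase to average food prices, weighting
# the at-home and away-from-home estimates by the half-and-half calorie
# split cited above, then interpolating to a one-point wage increase.

rise_at_home = 0.40    # % food-price rise per 12% wage increase (Lee and O'Roark)
rise_away = 1.10       # % food-price rise per 12% wage increase
share_at_home = 0.5    # share of calories eaten at home (Trivers and McBride)

avg_rise = share_at_home * rise_at_home + (1 - share_at_home) * rise_away  # 0.75%
per_point = avg_rise / 12.0   # 0.0625% per one-point minimum-wage increase
```

Multiplying `per_point` by a price elasticity of demand, such as the Huang and Lin estimates, then yields the implied percentage change in food consumption.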
A National Survey of Student Investment Clubs in Taiwan: The Use of Appreciative Inquiry Approach
Dr. Bryan H. Chen, National Changhua University of Education, Taiwan
A student investment club enables student members to gain real-world experience managing their own money in the securities market. A successful student investment club faces challenges it must solve, including limited capital, continual turnover of membership, inactive members, consistency of record keeping, reporting requirements, potentially high transaction costs, and liability concerns. The purpose of this study is to investigate the factors that affect a club's operation via the use of the Appreciative Inquiry (AI) approach. The findings reported in this study will help guide student investment clubs in developing sound strategies for running a club. David Cooperrider and Suresh Srivastva, the creators of Appreciative Inquiry (AI), suggest that AI is a form of organizational study that selectively seeks to highlight the life-giving forces of an organization's existence: for example, what makes organizing possible, and what are the possibilities for new, effective methods of organizing? Four basic principles guide AI: exploration of the life-giving forces of organizations should be appreciative, applicable, provocative, and collaborative (Rainey, 1996). This paper describes how the researcher used AI to investigate how students at eight particular universities in Taiwan are forming investment clubs to gain real-world experience and learning skills that will stay with them for a lifetime. Appreciative Inquiry (AI), a theory of action research and organization development practice, uses an imaginative approach to organizational study and learning to generate a collective positive image of a new and better future by exploring the best of what is and has been within an organization. AI uses a four "D" cycle of Discovery, Dream, Design, and Destiny (Cooperrider, 1990). 
In the Discovery phase, AI helps the organization mobilize a whole-system inquiry into the positive change core. In the Dream phase, AI helps the organization create a clear, results-oriented vision to discover potential. In the Design phase, AI helps the organization create possible propositions of the ideal organization and helps members amplify the positive core. In the Destiny phase, AI helps the organization create processes for learning, adjustment, and improvement (Barrett, 1998). Thus, the purpose of this study is to investigate the factors that affect the club's operation via the use of the Appreciative Inquiry (AI) approach. A good university investment club usually has access to finance faculty advisors and the numerous information resources available at university libraries. As members, students gain practical experience by managing their own money in the securities market. University investment clubs face many challenges. Limited capital is always the first concern for a university investment club, because student members may not have enough funds to contribute to the club. Second, continual turnover of membership affects the possibility of earning a high return. Third, inactive members create a problem because there may not be enough club members present at a meeting to have a quorum. Fourth, record keeping and reporting requirements must be consistent for reporting and tax purposes. Fifth, low transaction costs are a priority because investment clubs want to make their own stock selection decisions based on their own research, rather than relying on the advice of full-service brokers. Finally, liability concerns must be addressed by the partnership agreement (Cox & Goff, 1996). In order to support members' continuing education and training, investment clubs could benefit from workshops and consultations provided by learning centers such as The Investment Club Learning Center Inc. (ICLC). 
ICLC president Paul Barnett suggests that education and training are the keys to successful investment clubs, whether the clubs are new, old, struggling, or well established (Sykes & Cintron, 2000). The newest stock study courses for investment clubs, designed by the Investment Education Institute, incorporate all of the NAIC principles. Investment clubs use these stock study courses to improve their investment decisions. The courses cover topics such as successful stock selection, comparisons based on industry characteristics, and successful portfolio management. The purpose of this study is to investigate the factors that affect a club's operation through the Appreciative Inquiry (AI) approach. The following research question guided the study: To what extent does the student investment club support the continuing education of its members? This study used member interviews to gather data in an attempt to present a more complete picture of the factors that affect student investment club members' perceptions of the club's operation. The population for this study consisted of 128 student members at eight universities; in fact, only eight universities in Taiwan have offered a student investment club. The questionnaire instrument was made up of two parts. Section I requested demographic data regarding the background of the club students and used a checklist response format; this section covered college level, years of experience in the investment club, age, major field of study, and gender. Section II comprised 10 specific workshops rated by club students to indicate their perceived need for continuing education (see Appendix A). The questionnaire was written with help and inspiration from an article by Chen & Lavin (2001).
The modified questionnaire was developed from the experiential background of the researcher, who works as a finance professor in the Department of Business Education at the National Changhua University of Education, Taiwan. The 128 student club members were asked to indicate their college level within five categories. The highest percentage of respondents (34.4%) reported that they were juniors; only 3.1% reported that they were graduate students. The data regarding the college level of student members are displayed in Table 1.
Globalization, the Knowledge Economy, and Competitiveness: A Business Intelligence Framework for the Development of SMEs
Dr. Louis Raymond, Université du Québec à Trois-Rivières, Trois-Rivières, Qc, Canada
Globalization, the internationalization of markets, the knowledge economy, and e-business are among the interrelated phenomena whose emergence poses new challenges for the survival and adaptation of small and medium-sized enterprises (SMEs). For the various public and private organizations, such as government agencies, universities, and consulting firms, that constitute the development infrastructure for these enterprises, the issue that thus arises is the elaboration of policies, programs, and services that respond to the new competitive exigencies of SMEs in the changing business environment. Hence, it is essential to detect trends and understand strategic issues that stem from a global knowledge economy, that is, through business intelligence activities by which the economic, technological, and social environments of SMEs are scanned. This article proposes a conceptual and operational framework to this end. For analytical and operational purposes, these trends are grouped under four business intelligence themes as follows: a) the transformed management of the value chain and new organizational forms, b) information technologies, information systems, and e-business as sources of added value and vectors of competitiveness, c) the opportunity to develop new markets in a context of internationalization, and d) the development of human and intellectual capital, diffuse innovation, and organizational learning. Globalization, the internationalization of markets, the liberalization of trade, deregulation, the knowledge economy, e-business, and new forms of organization (network enterprises, virtual enterprises): all of these interrelated phenomena pose new challenges to small and medium-sized enterprises (SMEs). Though most often less endowed with human, financial, and technological resources than large enterprises, SMEs nonetheless have advantages in flexibility, reaction time, and innovation capacity that make them central actors in the new economy (Julien et al., 1996).
This profound transformation of the business environment must also be apprehended by the public and private organizations that constitute the "development infrastructure" of SMEs (research and transfer centers, information brokers, government agencies, consultants, and service firms) and assist them with policies, programs, methods, tools, products, and services. It is essential, however, for these organizations to be able to detect the trends and understand the strategic issues that stem from a global knowledge economy, that is, through scanning the competitive, commercial, technological, political, legal, and social environment of the SME (Dedijer, 1999; Hassid, Jacques-Gustave and Moinet, 1997; Raymond, Julien and Ramangalahy, 2001). Now under the name of business intelligence, these environmental scanning activities constitute a fundamental mode of organizational learning, to the extent that the small firm's adaptation and competitiveness depend on its knowledge and interpretation of the changes that occur in its environment (Beal, 2000; Choo, 1998; Raymond and Lesca, 1995). Given the economic development and business support mission of various regional, national, and international actors and organizations, this article proposes a conceptual and operational framework for the elaboration of business intelligence activities in answer to the following question: What are the present and potential impacts of the emerging global knowledge economy upon the competitiveness of SMEs?
More specifically, these business intelligence activities should give insight into: the important trends and ruptures in the new economic and competitive environment of SMEs; the factors of development and the key competencies, skills, and business practices required or imposed by this new environment; the strategic issues of the SMEs' adaptation to the new demands or constraints of their environment in the coming years; and the strategic issues of the elaboration and implementation of a development infrastructure for SMEs by the various stakeholders, in terms of development targets and practices. The general aims of business intelligence should be understood in light of the following preoccupations: an emphasis on the conditions of transition (adaptation/mastery) to an information and knowledge economy for manufacturing and technological SMEs, including competitiveness factors, key organizational and technological competencies, and best practices in particular (with a view to diffusion or transfer to the greatest number of SMEs, this preoccupation should not be restricted to "high-tech" sectors such as biotechnology, nor to firms that are already innovative); an emphasis on the qualification of SMEs and their support networks, where the units of analysis are foremost the firms themselves and their networks (as stakeholders or economic agents), rather than economic structures such as industries or sectors; and an emphasis on prospective issues in the medium term, aiming to orient collective actions of economic development and, more specifically, to provide input to decision-makers who must design relevant support policies and action programs.
To do this, the stakeholders in the development of SMEs must be involved, particularly the specialized intermediaries whose task it is to translate the strategic issues identified into concrete targets for action; and the process must be synthetic, designed to arrive quickly at an overall view of the agreed-upon business intelligence themes or modules, using a triangulation strategy (research outputs, institutional outputs, and the realities of the SMEs and their support networks). Specifying the preceding question, the framework proposed in this article relates the following four sub-questions for analysis, that is, for each of the business intelligence themes and for the resulting synthesis: What are the trends, the factors of development, and the best practices most likely to increase the competitiveness of SMEs in the medium term?
A Field Research on the Effects of MIS on Organizational Restructuring
Dr. Cemal Zehir, Gebze Institute of Technology, Istanbul, Turkey
Dr. Halit Keskin, Gebze Institute of Technology, Istanbul, Turkey
This paper investigates the changes that result when companies adopt developments in information technology during organizational restructuring. In this context, it examines the relationships between enterprises' goal of adapting their organizational structure, the activities undertaken to spread the use of MIS during this process, the influence of MIS on in-company activities and processes, and the objectives of improving financial performance and competitive ability through MIS built on information technologies. The paper also includes a field study carried out among the top 250 manufacturing companies. The research yielded findings that may contribute to theoretical knowledge. The emergence of the restructuring concept in organizations can be understood by examining the history of organizational development. Restructuring actions focused on the classical administrative ideals of specialization and departmentalization could not meet customer expectations during the transformation from an industrial society to a knowledge society. Quality, low costs, continuous improvement, and responding to customer expectations in a shorter time have become the most important concepts of the information society period. Organizations structured in classical ways could not adapt to customer expectations because of communication problems between departments. During the information society period, organizations began to accept restructuring as a compulsory choice for survival, built around a customer-focused restructuring plan. During that restructuring process, firms also benefited from the advantages of Management Information Systems (MIS). In this paper, the relationship between restructuring strategy and MIS is studied. Restructuring is the reshaping of organizational units.
Through this application, departments or organizational units can be combined or separated into new units. Restructuring is the activity of reducing or redesigning the number and size of organizational units or the number of hierarchical levels (Keidel, 1994, p.12). Organizational restructuring aims to reduce the work force by redesigning the work itself. This strategy uses the methods of eliminating and centralizing departments, products, functions, and hierarchical levels, and of reducing working hours. Business managers also redesign jobs, rearrange departments, define new spans of control, and increase empowerment actions (Cameron, 1999, pp.93-114). During the restructuring process, organizational functions and units, such as production, distribution, engineering, and information systems, can be combined. Delayering, reducing the number of hierarchical levels or positions, is another way of restructuring. Delayering is carried out in organizations to shorten the hierarchical distance between top-level managers and ordinary employees (Keidel, 1994, pp.13-15). Restructuring is a medium-term strategy (Wager, 1998, pp.301-302). In the organizational restructuring process, besides survival and development, organizations aim to increase production, efficiency, competitive capacity, and quality in pursuit of higher organizational performance (Feldheim and Liou, 1999, p.57). The strategic goals of organizational restructuring are cost reduction, time saving, empowerment, acceleration of learning, and improvement in the quality of output and work life (Pruijt, 1998, p.264). In addition, taking over a firm, mergers, quick action to prevent bankruptcy, and preparing businesses for privatization can be considered strategic purposes (Labib et al., 1994, pp.61-62). Today, organizational restructuring is one of the most popular business practices in North America, Europe, and Asia (Weber, 1999, p.150).
Basic techniques used by organizations in restructuring processes are simple arithmetic ratio calculations. These techniques include ratios related to sales volume, the number of subordinates a manager can control, and the number of employees working in the production department. Benchmarking plays an important role in corporate restructuring: through benchmarking, organizations can compare their productivity with their rivals' productivity and identify the differences. Data gathered by this method can guide large-scale work reduction. Cost reduction is the basic subject of restructuring (Keidel, 1994, p.15). To survive and succeed in the global competitive environment, there is no alternative to making rational and efficient decisions. The focus on efficiency and competitiveness entailed by the organizational restructuring strategy fits the economic model (Feldheim and Liou, 1999, p.57). Restructuring assumes that high technology is continually improving information systems, communications, production processes, and expert systems. As a means of increasing competitive power, restructuring became an alternative to the paradigms that existed before the 1990s (Drew, 1994, p.8). Change is a way of life, and restructuring is a necessity: organizations face ever-increasing global competition, so they frequently restructure themselves. Nowadays, the restructuring process has become prevalent because of intensive global competition and gradual economic growth in western developed countries (Eren, 2000, p.261). Organizations tend to consider personnel development in seeking competitive advantage (Keidel, 1994, pp.15-16). Employee empowerment actions are important in the restructuring process (Hammer and Champy, 1993, pp.51-62). In research examining restructuring actions, effective use of information technologies is considered an important factor in achieving successful results.
Much recent research in this area likewise indicates the importance of restructuring and the increasing role of management information systems in that process (Weber, 1999, p.150; Bishop, 1995, pp.25-33). Information technology (IT) was first used to monitor employee data in the 1960s, marking its entry into business activity (Martinson, 1997, p.35). Then, in the 1980s, information systems (IS) studies began to examine management-related subjects rather than the technologies themselves (Lai, 2001, p.263).
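The "simple arithmetic ratio calculations" that the paper describes as basic restructuring techniques can be illustrated with a short sketch. The firm figures and the particular ratio set below are hypothetical illustrations, not data or formulas from the field study:

```python
def restructuring_ratios(sales, total_employees, production_employees, managers):
    """Compute simple restructuring ratios for one firm (illustrative set):
    sales per employee, average span of control, and the share of the
    workforce in production."""
    return {
        "sales_per_employee": sales / total_employees,
        # non-managers divided by managers approximates the average span of control
        "span_of_control": (total_employees - managers) / managers,
        "production_share": production_employees / total_employees,
    }

# Hypothetical firm: $50M sales, 500 employees, 300 in production, 40 managers
ratios = restructuring_ratios(50_000_000, 500, 300, 40)
# sales_per_employee = 100000.0, span_of_control = 11.5, production_share = 0.6
```

Benchmarking, as described above, would then compare each ratio against the corresponding figure for a rival firm to locate the gaps.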
A Comparison of Leading Central Banks and Their Effectiveness Using the Discount Rate as a Monetary Policy Tool
Dr. Samuel B. Bulmash, University of South Florida, Tampa, FL
The actions or inactions of a central bank can have profound effects on the citizens of its country, impacting their standard of living as measured by changes in gross domestic product, unemployment, and inflation. How do the leading central banks of the world compare in their organization, powers, and goals, and in their ability to affect macroeconomic indicators? This study examines the impact of discount rate policy by the central banks of three leading industrialized nations, one from each of the world's major economic regions: the United States' Federal Reserve System, the United Kingdom's Bank of England, and Japan's Bank of Japan. This paper begins by describing the basic organizational structure, monetary policy tools, supervisory and regulatory powers, payment system, and various services of each central bank. Table I below summarizes several key aspects of each bank, and Section I provides essential details. With this insight and background, we formulate our hypothesis, stated in Section II, asserting which of the aforementioned central banks exhibits the most power and influence over its nation's economy through a key monetary tool, the discount rate. Next, in Sections III and IV, we test our hypothesis by statistically deriving correlations between each bank's discount rate and various macroeconomic indicators of each economy, and comparing these correlations. Finally, in Sections V and VI, we summarize and discuss our findings and draw conclusions. Ultimately, it is the author's hope that this paper will shed light upon central bank control and consequent economic performance. Congress founded the Federal Reserve System in 1913 in response to the economic crises and financial panics that had plagued the nation for the prior 50 years. These financial panics were combinations of bank failures, bank runs, and stock market crashes (Miron, 1986).
The Federal Reserve Act of 1913 established the central bank of the United States in order to provide the nation with an independent, adaptable, and more stable banking system. The Federal Reserve (the "Fed") is considered independent in the sense that its decisions do not have to be ratified by the President or anyone else in the executive branch of government; however, its actions are subject to the authority of Congress. The system's objectives, as defined by subsequent legislation and stated by the Fed, include: "economic growth in line with the economy's potential to expand; a high level of employment; stable prices; and moderate long-term interest rates." The Federal Reserve System is composed of the Board of Governors, twelve regional Reserve Banks, and the Federal Open Market Committee. Serving as the core institution, the Board of Governors is made up of seven members, all of whom are appointed by the President with the advice and consent of the Senate. The terms of office for Board members are 14 years and are staggered, so that one term expires on January 31 of each even-numbered year. The Board supervises and regulates the operations of the Federal Reserve Banks and their Branches and the activities of various banking organizations, exercises broad responsibility over the nation's payment system, and administers most of the nation's laws regarding consumer credit protection. It has sole authority over changes in reserve requirements, and it must approve any discount rate changes initiated by a Federal Reserve Bank. The twelve regional Reserve Banks and their twenty-five Branches are responsible for operating a nationwide payments system, distributing the nation's currency and coin, supervising and regulating member banks and bank holding companies, and serving as banker for the U.S. Treasury. Each Reserve Bank acts as a depository for the banks in its own District and is formally responsible to a nine-member board of directors.
This board of directors is responsible for the administration of its bank and for the establishment of the discount rate charged to banks for borrowing from the Reserve Bank. The Federal Open Market Committee consists of the seven members of the Board of Governors, the president of the Federal Reserve Bank of New York, and four other Reserve Bank presidents who serve one-year terms on a rotating basis. The FOMC is solely in charge of the buying and selling of securities by the Federal Reserve, otherwise known as open market operations. The Federal Reserve Act identifies the objectives of monetary policy. It specifies that monetary policy should effectively promote the goals of maximum employment, stable prices, and moderate long-term interest rates. The Federal Reserve conducts monetary policy using three major tools: open market operations, reserve requirements, and the discount window. Using these tools, the Federal Reserve adjusts the supply of reserves in relation to the demand for reserves. This affects the volume of money and credit, and hence interest rates. In this way, it influences inflation, output, and employment. Monetary policy actions affect prices by first affecting the real rate of interest and hence aggregate demand, and eventually the rate of unemployment (Anderson, 1984). As briefly mentioned before, open market operations involve the buying and selling of U.S. government securities in the open market in order to influence the level of reserves in the depository system. The Fed's open market operations are the most flexible and most frequently used instrument for controlling the money supply. If the money supply is growing too slowly, the bank may purchase Treasury securities, thus injecting cash into the financial system and expanding its monetary base. This enables banks to create additional deposits, which constitute a major portion of the money supply.
Similar results can be achieved by changing the required reserve ratio, although such actions are much less common. These reserve requirements specify the amount of funds that depository institutions must hold in reserve against deposits. When the required reserve ratio is raised, banks are unable to create as much money as they previously could, because a larger portion of their assets must be held in reserves. Lastly, the Federal Reserve can change the discount rate, the interest rate charged to depository institutions when they borrow reserves from a regional Federal Reserve Bank.
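The statistical test described above for Sections III and IV reduces to computing Pearson correlation coefficients between each bank's discount rate series and a macroeconomic series, then comparing the coefficients across banks. A minimal sketch follows; the series values are hypothetical placeholders, not data from the study:

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical annual series (placeholders, not the paper's data):
discount_rate = [6.0, 5.5, 5.0, 4.5, 5.0, 5.75]   # percent
inflation     = [3.1, 2.8, 2.3, 2.0, 2.4, 2.9]    # percent, CPI change

r = pearson(discount_rate, inflation)  # lies in [-1, 1]
```

Repeating this for each macroeconomic indicator (GDP growth, unemployment, inflation) and each central bank yields the comparable correlation tables the paper's hypothesis test relies on.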
Partners, Resources, and Management Mechanisms of Inter-organizational Collaborative Ties in Non-profit Organizations
Dr. Tzu-Ju Peng, Providence University, Taichung, Taiwan
Postdoctoral Fellow, J. L. Kellogg School of Management, Northwestern University, IL
Relevant theories of cooperative strategy primarily apply to the private sector rather than to non-profit organizations (NPOs). The purpose of this study is to explore the relationships among the partners, resource inputs, and management mechanisms of inter-organizational collaborative ties. Derived from the resource-based view, four questions were proposed and examined via both qualitative and quantitative methods. The empirical results demonstrate, first, that within the inter-organizational collaborative ties of NPOs, the resources provided by private businesses do not differ from those provided by public institutions; likewise, the resources available to private businesses do not differ from those available to public institutions. Second, the management mechanisms adopted by NPOs did not vary significantly with different partners, but did vary significantly with the different resources contributed to the inter-organizational cooperative relationship. Dramatically competitive environments have ushered in the era of collaborative strategies. Although inter-organizational strategy has become a more important issue, few articles focus on the management mechanisms adopted in these inter-organizational relationships. Furthermore, relevant theories and empirical research in management have primarily been applied to the private sector rather than to the not-for-profit sector. This is evidenced by Scandura and Williams, who compared management research methodology across two periods: the highest percentage reported for type of sample was for samples drawn from the private sector, whereas only 2.1% of samples in the 1980s and 0.9% in the 1990s came from not-for-profit, governmental, and nongovernmental organizations (Scandura and Williams, 2000: 1257, 1259).
In practice, nonprofit organizations have adapted to changing environments in various ways, from subcontracting to partnership to outright conversion to for-profit status (Ryan, 1999). They are being called on to serve more people, with better results, than they have in the past. Both business and social sector organizations are reinventing themselves by forming alliances. Sagawa and Segal (2000) noted that, rather than framing the purpose of such partnerships as the pursuit of opportunities, improving efficiency has become the major reason for nonprofit organizations' collaboration. Although cross-sector alliances are increasingly popular, business and nonprofit organizations bring different expectations to these relationships; cross-sector collaborations may therefore involve frictions and conflicts over differing expectations. Managing conflicts and expectations in partnerships becomes an important issue, particularly as cross-sector cooperation proliferates. On both theoretical and practical fronts, this study tried to apply management theories to interpret phenomena in non-profit settings. Therefore, in applying business management theories to the non-profit sector, this paper proposed several questions that remained to be answered. Drawing on the resource-based view and employing both qualitative and quantitative methodologies, this study aimed to clarify the relationships among the partners, resources, and management mechanisms of inter-organizational collaborative relationships in non-profit organizations. Most definitions of cooperation focus on the process by which individuals, groups, and organizations come together, interact, and form psychological relationships for mutual gain or benefit (Smith, Carroll, and Ashford, 1995:10).
Oliver (1990) defined inter-organizational relationships (IORs) as the relatively enduring transactions, flows, and linkages that occur among or between an organization and one or more organizations in its environment. Sagawa and Segal (2000) regarded a business and social sector partnership as a relationship between two organizations that engage in one or more exchanges. In short, an inter-organizational relationship is defined as a voluntary cooperative agreement between at least two organizations that involves exchange or sharing, can include contributions by partners of capital, technology, or firm-specific assets, and aims at achieving competitive advantage for the partners (e.g., Harrigan, 1985; Zajac and Olsen, 1993; Ring and Van de Ven, 1994; Gulati, 1995; 1999; Das and Teng, 2000). Several scholars have questioned the role of transaction costs and appropriation concerns in alliances. Rather than the transaction cost perspective, Zajac and Olsen (1993) proposed a transactional value framework for examining inter-organizational strategies, highlighting the rationale of "joint value maximization" instead of "transaction cost minimization". Madhok (1996) provided the organizational capability (OC) approach, derived from the resource-based view, as an alternative explanation of the nature of governance: the decision to conduct a control activity is based not only on cost-minimizing considerations but also on the value-creating consideration that the firm has certain unique capabilities that enable it to create and realize value by doing so. Thus, a fundamental question for strategy researchers is the utility of the resource-based view in developing meaningful management tools in the form of actionable prescriptions for practitioners (Eccles and Nohria, 1992; Mosakowski, 1998; Priem and Butler, 2001). Das and Teng (2000:32) noted that one area that remains under-explored in the literature is the resource-based view of strategic alliances.
Das and Teng (2000:33) emphasize that the resource-based view can be particularly appropriate for examining strategic alliances because firms essentially use alliances to gain access to other firms' valuable resources. The fundamental theoretical statement of the resource-based view (RBV) is that valuable and rare organizational resources can be a source of competitive advantage (Barney, 1991:107).
Agency Cost Control
Dr. Richard H. Fosberg, University of the Incarnate Word, San Antonio, TX
Dr. Sidney Rosenberg, University of North Florida, FL
In this study, we seek to ascertain whether certain mechanisms found by Ang, Cole, and Lin (2000) to control agency costs in small firms are also effective in controlling large firm agency costs. We also investigate whether another agency cost control mechanism not tested by Ang, Cole, and Lin, a dual leadership structure, is also effective in controlling agency costs. Our analysis indicates that a dual leadership structure, greater share ownership by the firm's CEO, and greater share ownership by blockholders are all effective in controlling a firm's agency costs. These mechanisms reduce the selling, general, and administrative expense to sales ratio of our sample firms and/or induce better asset utilization by the firm (as measured by the firm's sales to total assets ratio). There is some weak evidence that greater share ownership by other officers and directors may also be useful in controlling a firm's agency costs. For many years, agency theorists have recognized that the separation of ownership and control common in most corporations creates conflicts of interest between a firm's managers and shareholders. These conflicts (agency problems) arise because managers have the opportunity to use the resources of the firm in ways that benefit themselves personally but decrease the wealth of the firm's shareholders. Examples of this type of opportunistic behavior by managers include drawing excessive compensation, consuming an excessive amount of perks, shirking their responsibilities, and investing in negative net present value (NPV) projects that offer personal diversification benefits to the firm's managers. A number of mechanisms have been suggested to control these agency problems; these include share ownership by managers, concentrated share ownership by other (nonmanagement) shareholders, use of a dual leadership structure by the firm, and managerial monitoring by outsiders (nonmanagers) on the board of directors.
Several empirical studies have attempted to ascertain whether these agency cost control mechanisms are actually effective, with mixed results. Two of these studies used a firm's return on equity (ROE) to measure agency cost control effectiveness; the authors argue that the more effectively a firm's agency costs are controlled, the higher the firm's ROE should be. Demsetz and Lehn (1985) test the effectiveness of share ownership as an agency cost control mechanism by examining the relationship between share ownership by a firm's five (and twenty) largest shareholders and ROE. They find no relationship between any share ownership variable and ROE. Baysinger and Butler (1985) test the effectiveness of outside director representation on the board of directors as an agency cost control mechanism. They find no relationship between outside director representation and contemporaneous ROE. In a more recent and in-depth analysis, Ang, Cole, and Lin (2000) were able to document, using a sample of small businesses, that agency costs can be controlled by some mechanisms. Ang, Cole, and Lin (ACL) use as their agency cost measures the operating cost to sales ratio and the sales to total assets ratio. The operating cost to sales ratio is a direct measure of agency costs and rises as agency costs rise. Conversely, the sales to total assets ratio falls as the firm's assets are less than optimally utilized. ACL's results indicate that agency costs fall as share ownership by the primary owner rises and when the firm's senior manager is also a shareholder in the firm. In this study we expand on the work of ACL by using a sample of large firms in our analysis and by examining an additional agency cost control mechanism that they did not consider, a dual leadership structure. We find that a number of agency cost control mechanisms are effective in controlling agency costs in large firms.
Specifically, our analysis indicates agency costs are reduced if a firm has a dual leadership structure, has greater share ownership by its CEO, and has greater share ownership by blockholders. There is some weak evidence that share ownership by other officers and directors can also help control a firm's agency costs. The remainder of this study is organized as follows. Section I contains a discussion of the agency cost control mechanisms to be tested, while Section II presents the sample selection procedures used in this study. The empirical results are contained in Section III, and a summary of findings is presented in Section IV. Fama and Jensen (1983) observed that when the benefits of the specialization of management and unrestricted risk-sharing are large, it will be efficient for firms to adopt an ownership structure in which the residual claimants (shareholders) do not play a significant role in managing the firm. Instead, professional managers will be hired to manage the firm on behalf of the shareholders. If employment contracts cannot be costlessly written and enforced, this separation of ownership (residual risk-bearing) and control (decision making authority) may allow managers to engage in opportunistic behavior which will lower the value of the claims of the residual risk-bearers (shareholders). In this study, the ability of three mechanisms to control these agency costs is examined. These mechanisms are share ownership by blockholders, share ownership by officers and directors, and a dual leadership structure. Shareholders of a corporation have a residual claim on the earnings and assets of the firm and therefore bear, in proportion to their share ownership, the economic consequences of actions taken by the firm’s managers and board of directors. If managers engage in opportunistic behavior, shareholders bear a pro rata share of the costs of such actions.
Consequently, a shareholder’s incentive to monitor management and make certain the firm is being properly managed is directly related to the proportion of the firm’s shares that the shareholder owns. This implies that one type of shareholder in particular, the blockholder (one who owns at least 5% of a firm's common stock), has a strong incentive to seek to control the opportunistic behavior of the firm's managers. Assuming this to be the case, greater blockholder share ownership in a firm should lead to greater monitoring of the firm by blockholders and result in lower agency costs being incurred by the firm's shareholders. A similar incentive to reduce agency costs exists for officers and directors of the firm. If officers and directors engage in opportunistic behavior, they (like all shareholders) bear the costs of this behavior in direct proportion to their share ownership in the firm. Consequently, greater share ownership by officers and directors means they bear a greater proportion of the costs of any opportunistic behavior in which they engage. The bearing of these costs reduces the net benefit of opportunistic behavior to the officers and directors. As a result, greater share ownership by officers and directors should reduce the amount of opportunistic behavior in which they engage and, therefore, the agency costs incurred by the firm's shareholders.
Determinants of on-Board Retail Expenditures in the Cruise Industry
Dr. Elise Prosser, University of San Diego, San Diego, CA
Dr. Birgit Leisen, University of Wisconsin, Oshkosh, WI
The cruise vacation industry is projected to reach almost $100 billion by 2005, as 9 million passengers go aboard each year. Faced with increased competition and shaved profit margins, cruise vacation marketers increasingly rely upon on-board tourist expenditures to provide supplemental revenue. Hence, identifying determinants of retail expenditures in on-board shops is critical. First, the authors propose hypotheses relating average daily on-board retail expenditures per person to vessel, voyage, and tourist characteristics: number of passengers, daily retail operating hours of the shops, type of ship, number of stores on-board, travel season, cruise destination, length of cruise, and average age of passengers. Next, regression analysis is employed to test the hypotheses and to determine the significance of each variable. Managerial implications and future research are discussed. There were almost nine million cruise vacationers in 2002, a 15.5% increase over 2001. There are more than twenty-five cruise lines worldwide, operating at a 97% average utilization rate (CLIA 2003). Cruises are popular because they combine a mode of travel with entertainment, gourmet cuisine, gambling, and sightseeing at ports-of-call all over the world. However, due to increasing competition among cruise marketers, greater price-sensitivity among consumers, and the recent downturn in the economy, profit margins have steadily been shaved. Consequently, cruise marketers rely on on-board retail expenditures to provide supplemental revenue. A decade ago, only small gift shops with sundries and souvenirs were available on-board cruise ships. As retail establishments, they were below par and not attractive to tourists (Qu and Ping 1999). Therefore, tourists spent little money on-board, saving their dollars for shopping at the ports-of-call.
However, since some tourists spend up to 50 percent of their total travel budget on shopping (Heung and Cheng 2000), cruise marketers are now repositioning themselves in an attempt to capture some of these expenditures. Recently, cruise marketers have begun to invest in on-board shopping facilities and have dramatically increased the breadth and depth of merchandise available. Furthermore, they are becoming more aggressive in marketing the shops to meet the needs of specific age groups, nationalities, and genders. They now offer a range of merchandise from inexpensive souvenirs to high-quality brand name goods and collectibles. The goal of the shops is to maximize retail expenditures on-board and to relieve the tight profit margins. From the cruise marketers’ point of view, profits from retail expenditures on-board can go straight to the bottom line. Therefore, many cruise ships maintain several stores on-board to spur shopping among passengers, essentially a “captured audience” (Teye and Leclerc 1998), while sailing between ports-of-call. Thus, managers are very interested in identifying determinants of on-board retail expenditures in order to increase profits. "Despite the large portion of tourists' expenditures that is spent on shopping, researchers have given limited attention to this segment of the industry" (Littrell, Baizerman, Kean, and Gahring 1994, p. 3). Notable exceptions include the work of Littrell et al. (1994) and Kim and Littrell (1999). These researchers investigate the relationship between souvenir buying and tourism styles, and the relationship between tourist attributes (e.g., world-mindedness) and souvenir purchase intentions. However, the gap in the literature with regard to identifying determinants of retail expenditures on cruise ships remains. What factors determine how much passengers spend on board? Do tourists spend more on long versus short cruises? How long should the shops be open?
Do demographic variables such as the age of passengers matter? Do retail expenditures differ by time of year? Does the destination influence on-board retail expenditures? Managers of cruise marketers and their retail establishments desperately need answers to these questions. Only recently have cruise marketers and the retail establishments started collecting detailed data on on-board retail expenditures in an attempt to answer some of these questions. In this article, these questions are addressed by identifying potential determinants of on-board retail expenditures including vessel characteristics, voyage characteristics, and a tourist characteristic. The vessel characteristics investigated in this study are the number of passengers, ship type, number of shops on-board, and the operating hours of shops. The voyage characteristics investigated are the length of cruise, cruise destination, and travel season. The tourist characteristic investigated is the average age of passengers. The paper begins with an overview of the cruise industry. Next, hypotheses are proposed based on interviews with managers and related literature review from both academic and industry trade journals. Finally, regression analysis is employed to test the hypotheses and to identify the significant determinants of on-board retail expenditures. Bookings in the cruise industry have increased ten-fold over the past two decades (Greenwald 1998). Ironically, following the 1997 release of Titanic, cruise bookings grew dramatically, despite the fact that the luxury ship actually sank (Busby and Klug 2000). For the next several years, many cruise lines ran at full capacity (Prynn 1999) and expanded their operations. Nine new cruise ships were christened industry-wide in 1998, and twenty more were commissioned for future use (Harmon 1998). This $10 billion investment in expansion is expected to increase the number of beds by 50 percent by 2002 (Greenwald 1998).
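The regression analysis the authors describe fits average daily spend to vessel, voyage, and tourist variables simultaneously. As a minimal sketch of the idea, the ordinary-least-squares fit below uses a single predictor (cruise length) and entirely hypothetical data:

```python
def simple_ols(x, y):
    """Slope and intercept of y = a + b*x by ordinary least squares."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    b = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y)) \
        / sum((xi - mean_x) ** 2 for xi in x)
    a = mean_y - b * mean_x
    return a, b

# Hypothetical voyages: cruise length (days) vs. average daily spend ($/person)
length = [4, 7, 7, 10, 14]
spend = [12.0, 18.5, 20.0, 24.0, 27.5]
a, b = simple_ols(length, spend)
print(round(a, 2), round(b, 2))  # a positive slope b would support
                                 # "longer cruises, higher daily spend"
```

The full study would instead estimate one coefficient per characteristic (passengers, shop hours, season and destination dummies, average age, and so on) and test each for statistical significance.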
Antidumping War Against China and the Effects of WTO Membership
Dr. Nadeem M. Firoz, Montclair State University, Upper Montclair, NJ
Dr. Ahmed S. Maghrabi, Arabia Consulting Group
The first dumping lawsuit against China came in 1979, when Europeans accused Chinese saccharin manufacturers of dumping. Since then, China has become a common target of dumping charges from most of its major trading partners. More than 422 cases have been brought against Chinese enterprises, involving more than $10 billion worth of products. This represents only a small portion of China's total exports, but the negative effect on some Chinese industries has been devastating. Chinese TV manufacturers know from experience how dumping charges can bring down one of the strongest industries in China. With a 44.6 percent antidumping tax on color TVs from China, the European market has completely closed its doors to Chinese TV manufacturers. One of the major reasons for these attacks is China's non-market economy status. Contrary to a market economy, in a non-market economy the state has a major influence on production costs and the final price of goods. The government provides some companies with subsidies and tax benefits that put foreign competitors at a disadvantage. Some countries, like the United States, claim that because of this situation China is able to sell its products at prices well below normal value. This is considered dumping, and it is condemned by the World Trade Organization (WTO). On the other hand, Chinese companies claim that they are able to sell at prices below most of their competitors because of lower labor costs and not having to comply with environmental standards. As a result of all this negative publicity, China has decided to fight antidumping charges and even to file some of its own. In recent years, China has taken major steps to ensure its prompt accession to the WTO. Once it is a member, antidumping procedures against China may have to change. The United States has agreed to remove China's non-market economy status, but only fifteen years after its entry into the WTO. This will allow the U.S.
to continue using a surrogate country to calculate normal value and determine dumping margins for at least another fifteen years. Is China a victim of trade discrimination and protectionism? Are Chinese companies and authorities trying to portray an image of China that does not reflect the true situation of its economy? How will China’s accession to the WTO help the country shake off antidumping charges? This article presents information that will help you answer these questions and arrive at your own conclusions regarding antidumping procedures against China. When talking about China and antidumping regulations, it is hard to picture the Chinese authorities on the offensive. But China has developed its own antidumping laws to protect domestic industries from dumping. Due to increasing globalization, and in an effort to join the WTO, China has removed some import quotas and reduced tariff and non-tariff barriers. This has led to an increase in the importation of products that have either lower prices or higher quality, sometimes leaving domestic industries at the mercy of foreign companies. The State Council promulgated China’s Antidumping and Anti-Subsidy Regulations on March 23, 1997. From that date until March 1, 2001, seven investigations were conducted (Ross, 2000). As is the case in the United States, antidumping regulations in China try to curb the effects of imports that substantially hurt, or threaten to hurt, established industries, or impede the establishment of comparable domestic industries. The definition of dumping used by the Chinese authorities is the same as that seen in other countries. Imports priced below their normal value are considered to be “dumped.” Normal value may be calculated using either the price of the product in the exporting country, the price in a third country, or a constructed value. Once normal value is determined, the Chinese authorities compare it to the price at which the product is sold to China.
If normal value is higher, an antidumping duty may be imposed on the product to compensate for the difference. In situations where there is no comparable price in the exporting country, reference is made to the price at which the exporter sells the identical or like product in a third country (Ross). Constructed value calculations may also be used to arrive at normal value. In this case, reasonable expenses and profit rates are added to production costs. When non-market economies are involved, Chinese antidumping regulations follow a different procedure from the one followed by the United States. According to current antidumping law in the U.S., when dealing with a non-market economy the price of the subject product in a comparable market economy must be used. This has affected trade with China immensely, since the United States considers China a non-market economy. Chinese antidumping regulations do not have this provision: the Chinese authorities use the company's own price, whether it operates in a market economy or not. Antidumping regulations are carried out by two government agencies, the Ministry of Foreign Trade and Economic Cooperation (MOFTEC) and the State Economic and Trade Commission (SETC). MOFTEC is in charge of antidumping investigations.
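The constructed-value and duty arithmetic described above can be sketched numerically. The costs, export price, and profit rate below are hypothetical, and the functions are illustrative simplifications, not part of any statute:

```python
def constructed_value(production_cost, expenses, profit_rate):
    """Constructed normal value: production cost plus reasonable
    expenses, marked up by a profit rate."""
    return (production_cost + expenses) * (1 + profit_rate)

def dumping_margin(normal_value, export_price):
    """Margin as a fraction of the export price; positive means dumping."""
    return (normal_value - export_price) / export_price

# Hypothetical figures: $80 production cost, $12 expenses, 8% profit rate
nv = constructed_value(production_cost=80.0, expenses=12.0, profit_rate=0.08)
# nv = (80 + 12) * 1.08 = 99.36
export_price = 90.0  # hypothetical price charged to the importing country
margin = dumping_margin(nv, export_price)
print(round(nv, 2), round(margin, 3))  # a duty of roughly this margin
                                       # would offset the difference
```

If the export price instead equals or exceeds the constructed normal value, the margin is zero or negative and no duty is warranted under this logic.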
Computer Access and Utilization Patterns Among Older People
Dr. Mustafa Kamal, Central Missouri State University, Warrensburg, Missouri
Dr. Godavari D. Patil, University of North Texas, Denton, Texas
The purpose of this study is to ascertain whether computer technology has a significant influence on older people’s adoption of computers in day-to-day life. A total of five hypotheses and five sub-hypotheses are tested. We find that persons with higher educational levels generally owned computers in a higher proportion than did those with less education, that there is no relationship between the educational level of the respondents and the means used for learning computer skills, that there is a direct correlation between income and the purchase of computers, that there is not much difference among the respondents in their responses about making a career in the computer field, and that as the age of the respondents increases, the use of computers decreases. We also found that most of the respondents had access to computers and used computers and the Internet profusely, especially as a means of communication to keep in touch with family and friends and to gain access to information on various aspects of life. There is a great need for older people to become computer literate in order to lead independent lives. Various organizations and companies are making efforts to encourage older people to acquire computer skills in order to keep them active members of society. As a result, increasing numbers of older people are not only buying computers but also learning how to use them in order to satisfy their varied needs. Computer technology has become one more component in a senior's survival skills. For the past several decades, the older population in this country has been increasing significantly. Improved medical technology and standards of living have contributed largely to this end. It is projected that in the next 35 years the number of people over age 85 will triple (Palmore, 2000). In recent years, growing numbers of senior citizens in the United States have been purchasing computers and using the Internet.
Computer information systems not only provide opportunities for communication that can help older people avoid social isolation but also prepare them for new careers. Computer-based interventions are improving psychosocial well-being among older people (Heidi & Eleanor, 1999). Although the older population has been slow to accept new technologies, technology is helping them to develop positive attitudes and increase life satisfaction (Groves, 1990). In this paper we examine the nature of older people's computer use, their sources for acquiring computer skills, their reasons for using computers, and how they form certain attitudes toward information technology. The purpose of this study is to ascertain whether computer technology has a significant influence on older people’s adoption of computers in day-to-day life and whether older people are successful in making use of information technology as a means of communication in meeting their needs. The following are the objectives of the present study. To examine computer access and ownership among older people. To determine how older people learn computer skills. To study computer and Internet utilization patterns among older people. To understand the psychological and economic benefits of computers and the Internet. To study the attitudes of older people toward computer learning and their perceptions of those who teach them computer skills and of employers who come forward to employ older people. To obtain their recommendations for improving computer information services. Recommendations will be made for further research, which would facilitate systematic exploration of this important field of study. As Internet use is rapidly growing, so too is the number of users aged 60 or older. The U.S. Department of Labor defines a worker as “older” at the age of 40. At 60, people are eligible to participate in most senior centers. Under Social Security, widows are eligible for survivor benefits at 60.
At age 62, people become eligible to live in housing for elders. Age 65 is the minimum age at which retirees can receive the maximum Social Security pension, and it is the dominant legal definition of when a person becomes “older.” Many senior citizens are rich in both time and money. This is the group that is especially adopting both computers and the Internet. Low-priced PCs, along with the Internet, new applications, and the promise of social support networks, attract these seniors. Research studies show that seniors with sufficient discretionary income are purchasing PCs and software to achieve computer literacy (Haas, 1996). Between early 1995 and mid-1998, senior computer ownership increased from 2% to 40% (Portland State University, 1999). A national survey conducted in 1995 found that 29% of adults aged 55 and older claimed to own a personal computer, while 23% of seniors 75 and older owned personal computers. It also revealed that education is a larger hindrance than age to older adults embracing this technology. More than 50% of seniors who owned PCs were college graduates, compared with only 7% of those with less than a high school education (Alder, 1996). Perry (2000) mentioned that 94% of older adults say that e-mail is their main on-line activity.
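Relationships like the one reported between education and computer ownership are typically checked with a chi-square test of independence. The 2x2 counts below are hypothetical (not the study's data), and only the test statistic, not a p-value, is computed:

```python
def chi_square_2x2(table):
    """Chi-square statistic for a 2x2 contingency table [[a, b], [c, d]]."""
    (a, b), (c, d) = table
    n = a + b + c + d
    row_totals = [a + b, c + d]
    col_totals = [a + c, b + d]
    stat = 0.0
    for i, observed_row in enumerate(table):
        for j, observed in enumerate(observed_row):
            expected = row_totals[i] * col_totals[j] / n
            stat += (observed - expected) ** 2 / expected
    return stat

# Hypothetical survey counts: rows = college / no college,
# columns = owns a computer / does not own one
table = [[45, 15], [20, 40]]
stat = chi_square_2x2(table)
print(round(stat, 2), stat > 3.841)  # 3.841 is the 5% critical value, 1 df
```

A statistic above the critical value rejects independence, which would be consistent with the survey finding that education, more than age, separates owners from non-owners.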
The Impact of Globalization on Cultural Industries in United Arab Emirates
Dr. Mohammad Naim Chaker, University of Sharjah, Sharjah, United Arab Emirates
Cultural industries have come to be included in a distinct sector in which the creation, production, and marketing of goods and services are combined. Cultural industries include media organizations, film production, the audiovisual sphere, print output, the multimedia sector, architecture, the performing arts, the plastic arts, and cultural tourism (UNESCO, 2000). Cultural industries produce consumer goods that convey lifestyles and values, with both an informative and an entertainment function, and cultural services that cover intangible activities such as the promotion of the performing arts, films, and values. According to the UNESCO report (2000), between 1980 and 1998 international trade in cultural goods increased five-fold. Interestingly, in 1990, 55 percent of the world’s exports of cultural goods involved just four industrial countries (Japan, the USA, Germany, and the UK). In the same year, western countries such as France, the UK, Germany, and the US accounted for 47 percent of global import flows of cultural goods. In recent years, China has emerged as a major player in the international trade arena, exporting a very large number of cultural products. In 1996, cultural products occupied the top spot in the hierarchy of US exports. In fact, the US earned US$60.2 billion from the export of cultural products, leaving behind such traditional export sectors as automobiles, agriculture, aerospace, and defense. Conspicuously, the copyright-based cultural industries in the US grew three times as fast as the annual growth rate of GDP (IIPA, 1998). These numbers clearly confirm the widespread belief that cultural industries, particularly in the West, have been spreading their wings far and wide. The UAE has a rich cultural heritage influenced by Islam and Arab traditions. The Emirate of Sharjah, in particular, has been recognized by UNESCO as a fascinating emirate that has taken important steps to protect the rich Arabian cultural heritage.
In fact, all other emirates in the UAE have taken steps to protect all aspects of Arabian culture in the emerging scenario of globalization. For instance, the Dubai Shopping Festival, which seeks to attract tourists from various parts of the world, is anchored in local traditions and cultural values. Being an open economy, the UAE has witnessed the import of a wide range of cultural products and services in recent years. These products and services have certainly affected the lifestyles of people in the country. Although there is virtual unanimity among economists that the gains stemming from the globalization of business are enormous for the nations of the world, concern about its perceived negative effects on national cultural industries has been expressed far and wide (Anderson et al., 200; Dollar and Kraay, 2001; Sutherland, 2002; Wei and Wu, 2001; and http://www.spc.int, among others). The Arab world, in particular, has witnessed interesting debates in the media about the impact of globalization on national cultural industries. Hence, it would be interesting, in this paper, to assess the impact of globalization on media organizations, which play an important role in the cultural context of the United Arab Emirates (UAE). Anthropologists, sociologists, psychologists, and economists have documented the fact that people in different cultures, as well as people within a specific culture, hold divergent value systems on particular issues. Bass et al. (1979) studied the attitudes and behaviors of corporate executives in twelve nations and found that the world is becoming more pluralistic and interdependent. Laurent (1983) found in his research some differences across national boundaries in the nature of managerial roles.
Hofstede (1980) corroborated and elaborated on the results of Laurent’s and others’ research in a forty-country study, later expanded to over sixty countries, in which 160,000 employees of American multinational corporations were surveyed twice. Hofstede, like Laurent, found highly significant differences in the behavior and attitudes of employees and managers from different countries who worked within multinational corporations. The impact of globalization on culture has been a significant research area for nearly three decades, with several examples of research undertaken in different cultural contexts (Glenn and Glenn 1981; Hofstede 1980, 1991; Egan 1994; Kono 1994; and Pothukuchi et al. 2002, among others). In the Arab world, it has been found that the globalization of business has had an impact on business practices and management styles (Anwar and Chaker, 2003). Therefore, it would be analytically interesting to examine the interaction of the global media organizations operating in the Dubai Media City with the cultural values of the country and the Arab region at large. Against the backdrop of the policies being pursued by the UAE to protect its culture in the era of globalization and the evidence of the proliferation of a large number of cultural products and services, this study adopts a case study approach to assess the impact of globalization on media organizations in the UAE. This kind of approach is in line with the methodology used in contemporary literature (Abbott, 1988).
Restructuring Japan's Banking Sector to Avoid a Financial Crisis
Dr. Jennifer Foo, Stetson University, Deland, FL
Japan's banking sector is in a crisis, despite the denials of the Japanese authorities. When banking problems go unresolved, the financial system distorts the allocation of credit and undermines its intermediation functions, leading to economic imbalances. Political paralysis and indecision have characterized Japanese policy makers' response to the banking crisis. The results are huge government spending and unprecedented budget deficits, while income growth continues to shrink as the economy deflates. This paper examines the causes of the decade-long Japanese banking crisis and the policy and restructuring efforts. Lastly, I suggest policy recommendations to avoid a full-blown financial crisis. Banking crises are widely viewed as particularly pernicious and damaging because of their possible contagion effects on other financial functions and markets, as documented in an extensive literature (Garcia-Herrero 1997, Honohan 1997, Demirguc-Kunt and Detragiache 1998, Peria 2002). Financial inefficiencies manifest themselves in three principal harmful ways. First, financial instability discourages capital lending and investment, leading to recession and deflation. Second, capital contraction denies healthy companies opportunities to grow, adversely affecting the real sector and the economy. Third, savers are denied saving opportunities, with rates of return close to zero lowering consumer confidence and spending and leading to recession. The Japanese banking crisis has threatened to develop into a national financial crisis that will have adverse implications not only for the Japanese economy but, because of Japan's status as the world's second largest economy, for the global economy as well (Sugisaki 1998, Kurlantzick 2001).
Kaufman and Seelig (2002) identify five potential sources that increase the costs to depositors, to the government, and to the economy in resolving a banking crisis: poor disclosure rules, regulatory forbearance, insufficient information and processing delays, bad market conditions after resolution, and an inefficient receiver. Recognizing, and therefore resolving, the banking crisis should be of the highest priority for the Japanese government in order to minimize costs and avoid a full-blown financial crisis. The Japanese banking crisis, which began in the early 1990s, has several causes. First, and foremost, is the innate banking structure itself, which was developed to protect the banks against competition and led to imprudent loan speculation and massive bad loan problems. The structure of the Japanese banking sector is segmented into short- and long-term lending and financing, with significant government lending directives. There are seven distinct categories of financial intermediation based on the banks' specialization, as shown in Figure 1. Second, the Japanese banks were unprepared to compete in the global financial markets when the Japanese financial markets globalized. Third, the Japanese interlocking “keiretsu” corporate relationship tends to foster interdependence between the banks and the corporations, with no consideration for risk management in lending. Fourth, policy makers contributed to the financial inefficiencies with lax accounting and disclosure requirements. Noland (1996) identifies multiple origins of Korea’s banking crisis that can be applied to the Japanese banking system. First, a highly regulated financial market is exposed to greater risk because of the lack of portfolio diversification. For Japan, the innate segmentation of the banking sector into short-term and long-term lending and a highly regulated deposit base provided little diversification for the banks.
Second, a banking crisis usually follows a lending boom and a stock and real estate bubble. In their heyday of liquidity, the Japanese banks overextended themselves with speculative lending in real estate and the stock market. Third, government politicization of lending decisions in the banking system is also a contributing factor. Japanese government bureaucrats determine the most basic aspects of lending and pricing, such as deposit pricing and loan directives to favored industries. Fourth, banks face a liquidity mismatch in their portfolio assets, reflecting a lack of transparency and reporting adequacy. Inadequate disclosure and reporting by the Japanese banks made it difficult to evaluate the extent of the banks' bad loans, not only for the international community but also for the Japanese regulators themselves. Fifth, bank managers had incentives for moral hazard behavior given the deposit insurance guarantee and the explicit “no fail” and “too-big-to-fail” policies. There has been no Japanese bank failure since the Second World War, and the political bureaucrats are reluctant to admit that there is a banking crisis. Sixth, banks were inadequately prepared when financial liberalization exposed their operations and markets to competition. Japanese bank managers were ill-equipped in technology and skills to compete in the global markets, having operated in a highly protected environment. The explicit guarantee that banks are not allowed to fail (1), together with deposit insurance, creates incentives for risk taking while depositors are lulled into false confidence. This policy was still in force as late as 1998, when the Obuchi administration, on taking over from the Hashimoto administration, promptly declared that none of the nineteen largest banks would be allowed to fail (Economist, 1998).
Japanese depositors took little responsibility for monitoring the risk of the banks, and the policy engendered moral hazard behavior in the banks. Although the costs and severity of the crisis forced a change in this policy, the tendency is still to merge problem banks, with community sharing of losses and debt waivers: the “Japanese practice.”(2)
Attitudes Toward the Management of International Assignments: A Comparative Study
Lan-Ying Huang, The Overseas Chinese Institute of Technology, Taichung, Taiwan
The era of hypercompetition (D’Aveni, 1995) has stimulated most international organizations to pay increased attention to cost reduction and cost effectiveness for employees transferring between countries. This article summarizes practices espoused by most researchers and commentators on the management of international assignments. Attitudes toward the practices of international assignments were tested through a survey of employees and executives in international firms. The survey results reveal that the two groups do not differ in their attitudes toward pre-assignment training, continuing support, assignments for younger assignees, opportunities for career advancement, and a repatriation agreement; only the duration of assignment shows a significant difference. There is now a mature literature on the management of international assignments (IAs) within the broader field of the internationalization of organizations. Research on the management of international relocation within this field has looked at: high expatriate failure rates (Tung, 1981; Black & Gregersen, 1991; Alder & Bartholomew, 1992; McDonald, 1993; Dowling, Schuler & Welch, 1994; Ralston, Terpstra, Cunniff & Gustafaon, 1995); the significance of pre-assignment training (Dunbar & Katchen, 1990; Brewster & Pickard, 1994; Katz & Seifer, 1996); expatriate adjustment to a foreign country (Black, 1988, 1990; Black & Mendenhall, 1991; Black & Gregersen, 1991; Black, Mendenhall & Oddou, 1991); the impact of international relocation on dual-career couples and family members (Harvey, 1985, 1995, 1996, 1997a, 1997b; Punnett, Crocker & Stevens, 1992; Handler, 1995; Baliga & Baker, 1995); and the role the family plays in international relocations (Forster, 1992).
As more and more corporations move from their domestic borders into the dynamic international arena, they are also encountering high expatriate failure rates and costs; therefore, the demand for individuals who can function successfully on international assignments continues to increase (Bjorkman and Schaap, 1994; Hodgetts and Luthans, 1993). Over the last twenty years, the increasing number of women in the workforce and of dual-career families has made individuals unwilling to interrupt their careers for an extended time or to give them up altogether (Handler & Lane, 1997; Harvey, 1995, 1996; Punnett et al, 1992; Geoffrey, 1999). It is also clear that expatriates leave their companies soon after returning home because there is a significant mismatch between expatriates’ expectations prior to their repatriation and what they actually encounter after they return home. A growing number of international companies are developing host country nationals (HCNs) and third country nationals (TCNs) to staff foreign subsidiaries, or are bringing key staff of their foreign operations into corporate headquarters (inpatriates), to make transfers for career development and other purposes. International companies have been examining a number of alternatives to the use of expatriates to meet their global strategic needs and to address the forecasted shortage of qualified global managers in the future (Gregersen, Morrison & Black, 1998). This study has three objectives. First, I highlight the problems that companies and expatriates encounter. Second, I identify the policies and practices voiced by commentators on various facets of expatriate management. Finally, this study attempts to examine the attitudes of companies’ executives and their current or potential expatriates toward the policies and practices of international assignments, and to explore the reasons for the differences.
In addition, the relative importance of various issues to both companies’ executives and their expatriates is examined. Previous research has identified the advantages and disadvantages of expatriate management, but the relative importance of various issues in international assignments has not been clearly delineated. Global competition has increased the demand for companies with foreign expansion to utilize more rigorous and sophisticated policies in the areas of international assignments. Generally, there are three reasons for international postings: to fill positions, to develop managers with long-term potential through international experience (Edstrom & Galbraith, 1977), and to control local operations (Torbiorn, 1985; Brewster, 1991). However, the nature of international business operations involves the complexities of different countries and of different national categories of workers (Morgan, 1986). The major problem is often described as high expatriate failure rates. It is recognized that the financial costs of failure in the international business arena are considerably more severe than in domestic business (Dowling and Schuler 1991; Forster 1997). According to a number of recent studies, in the U.S., for example, the failure rate of expatriates ranges from 16% to 40%, depending on the location of assignment (Black, 1988; Copeland & Griggs, 1985; McDonald, 1993; Ralston et al, 1995; Tung, 1981). The principal reason given for the failure problem is that most companies do not give sufficient attention to the selection, training, and monitoring of employees and their families on international assignments. Business failures in the international arena may often be linked to poor management of expatriates (Tung, 1984), and many companies underestimate the complex nature of human problems in international operations. Repatriation has now become a major problem for some returnees and their dependents.
Many American companies do not make any explicit provision for the return of expatriates (Adler, 1981; Black, 1992). Statistics show that repatriated managers in US firms leave their companies at twice the rate of domestic managers without international experience, and that 20 percent of repatriated managers leave their companies within one year after returning from overseas assignments; fifty percent leave within the first three years (Frazee, 1997). In the globally competitive market, international firms will not be able to compete effectively against major global competitors without world-class managers world-wide (Adler & Bartholomew, 1992; Harvey, 1997b). In addition, Scullion (1994) reported that two-thirds of the companies in her study had experienced shortages of international managers, and over 70% indicated that future shortages were anticipated.
Use of Value Line Investment Survey in Teaching Investments
Dr. Bryan H. Chen, National Changhua University of Education, Taiwan
The Value Line Investment Survey is one of the most famous and widely used sources of financial data and investment information for teaching finance students in the United States. However, this database is relatively new to Taiwanese students. This paper describes the experience of using the Value Line Investment Survey database in the teaching of investments in a large College of Technology and Vocational Education in Taiwan. The selected university is so far the only university in Taiwan that owns the Value Line Investment Survey database. The purpose of this exploratory study was to assess student perceptions of this innovative approach to the teaching of investments at this university. The findings of this preliminary investigation suggest that the Value Line Investment Survey database should be made an integral part of instruction in the finance curriculum in Taiwan. Much has been investigated about whether the use of computer-aided learning in financial education helps students’ learning in the United States (Kamath, Pianpaktr, Chatrath, & Chaudhry, 1996). For example, Clinebell and Clinebell (1995) surveyed 241 chairs of finance departments at American universities and found that computer-aided learning was widely available at schools belonging to The Association to Advance Collegiate Schools of Business (AACSB). However, the level of integration into the finance curriculum for teaching was inconsistent and low. One of the reasons was that it is not easy to persuade faculty to use technology in the classroom to enhance students’ learning (Maher, 2001). According to Marks (1998) and Gifford (1999), students in higher education achieve better learning efficacy with computer-aided learning than without it. Greco and O’Connor (2000) found that the adoption of computer-aided learning (CAL) was effective in achieving finance students’ learning outcomes.
Using computer-aided learning is, after all, something that finance professors are all called upon to do in financial education, for the following reasons. First, most finance faculty agreed that using computer-aided learning in their teaching would be a great method for preparing students for successful careers in the financial service industry (Saunders, 2001). Second, most financial executives agreed that finance students should have more computer-oriented skills in response to the needs of the finance community (McWilliams & Pantalone, 1994). Thus, during the past couple of semesters, computer-based analyses of financial data were made part of a required investments course offered in the College of Technology and Vocational Education at National Changhua University of Education, Taiwan. The purpose of this exploratory study was to assess student perceptions of this innovative approach to the teaching of investments at this university. The following research questions guided the study: To what extent does Value Line Investment Survey database usage have a far-reaching impact on students’ class work in this “Investments” course? To what extent does Value Line Investment Survey database usage facilitate understanding of the theory and practice of finance? To what extent do Value Line Investment Survey database applications benefit students who will work in the financial industry in the future? To what extent does computer usage make students interested in learning more about the financial subject matter? To what extent are Value Line Investment Survey database applications important to students in the analysis of financial statements? To what extent does computer usage motivate students to learn more about the field of finance?
What perceptions do students have regarding too much stress being placed on Value Line Investment Survey database usage in this “Investments” course? What perceptions do students have regarding whether sufficient instruction was provided for the usage of the Value Line Investment Survey database? What perceptions do students have regarding use of the Value Line Investment Survey database as a frightening and unpleasant experience? To what extent should Value Line Investment Survey database usage become an integral part of instruction in finance? In order to achieve the purpose of the study, the researcher chose to apply a quantitative as well as a qualitative method. The quantitative method refers to the survey the researcher implemented in the form of a questionnaire. Through the survey the researcher strove to assess student reactions to this innovative approach to the teaching of investments at this university. The qualitative method is implemented through the researcher’s attempt to describe the functions and benefits of the Value Line Investment Survey with the help of related literature. The subjects for this investigation consisted of students enrolled in two selected classes of the investments course offered in the College of Technology and Vocational Education at National Changhua University of Education, Taiwan. The course “Investments” is well suited to introducing students to the stock market and to some of the information sources, such as the Value Line Investment Survey, that are available for analyzing investments. The uniqueness of this course provided the researcher a great opportunity to investigate whether students could improve their learning via the Value Line Investment Survey database. The questionnaire instrument was made up of two sections. Section A requested specific background information about the finance students and used a checklist response format. This section included class category, full-time work experience, age, gender, and GPA.
Section B was comprised of 10 specific questions that asked students to indicate the level of their reaction to the Value Line Investment Survey (see Appendix A). The questionnaire was written with help and inspiration from an article by Dr. Darshan Sachdeva (2001). The modified questionnaire was developed based upon the experiential background of the researcher, who works as a finance professor with the Department of Business Education at the National Changhua University of Education, Taiwan. A total of 79 questionnaires were distributed during the last week of instruction. All returned questionnaires were usable. The 79 students were asked to indicate their class category within two classifications. Almost half (48.1%) of those responding reported that they were evening students; the others were day students. The data regarding class category classification for students are displayed in Table 1.
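As a small consistency check on the figures above, the reported 48.1% evening-student share is what one obtains if 38 of the 79 respondents were evening students; the 38/41 split below is inferred from the rounded percentage, not stated explicitly in the text:

```python
# Reconstruct the class-category split from the reported figures.
# The evening-student count of 38 is an inference from "48.1% of 79",
# not a number given in the paper.

total = 79
evening = 38            # inferred: 38 / 79 = 48.1% (rounded to one decimal)
day = total - evening   # the remaining respondents are day students

share_evening = evening / total
print(f"Evening: {evening} ({share_evening:.1%}), Day: {day} ({1 - share_evening:.1%})")
```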
The Challenges of Teaching Statistics in the Current Technology Environment
Dr. William S. Pan, University of New Haven, West Haven, CT
In today’s constantly changing world, teaching and learning, like everything else, are changing. The teaching methods, the technology, and even students’ learning habits are changing. One can therefore predict that the most effective teaching methods, those that help students learn better, and the technology that makes content information easily accessible to students will certainly become more popular and widely adopted over time. In the academic world, computer technology is rapidly changing the ways Statistics courses are taught. Most statistics courses now have computer exercises. As professors and scholars, we have to face this new technology challenge. We have to manage this dynamically changing and evolving new environment. In our paper, the strengths of Web-based education and the pitfalls of teaching an online statistics course will be discussed. Web-based online course teaching and learning means different things to different people. One can define Web-based education as a rational communications network that transfers regular classroom activities into a cyber environment, in which physical interaction between the instructor and student is neither visible nor required. One can also define online course delivery as a software-related teaching and learning environment that preserves custom-designed learning for the individual student. Some researchers also define an online course as a complete learning and teaching program that mostly employs one or more media tools to deliver the required course materials to students through the use of the Internet. Another researcher defines online courses using a different phraseology: Taylor points out that online learning is a different teaching and learning approach that focuses primarily on the advancement of students’ capabilities. This teaching and learning method utilizes highly structured Internet tools and the World Wide Web.
One of its strengths is that this teaching and learning method actively involves the learner in designing his/her own knowledge-acquiring process. In this paper, we define online course teaching and Web-based education as a dynamic new way of learning and teaching in which the student’s learning is self-paced and self-motivated. The study time is anytime the learner wishes to study, which in effect gives the student virtual 24/7 cyber time. On the other hand, the instructor shares control with the student over the pace of learning, and also pays close attention to the progress made by the individual student. The instructor in reality provides the student (the learner) an individualized, custom-designed education. Since the Web-based online course utilizes modern learning techniques and technology, it reaches a much wider range of students. This teaching method can better satisfy students’ need and desire to learn. One can list, from a learner’s point of view, many advantages of Web-based learning: Convenience: The study time is anytime the student chooses to study. The study place is also up to the student to decide; it can be anywhere a student accesses the course material through the Internet and the World Wide Web. Feedback: When the HTML format and email links are built into the e-lecture online course material, student feedback and reaction data can be quickly collected. Self-administered online testing can provide the student a very useful self-evaluation and self-learning mechanism. As a result of this continuous evaluation and outcome assessment process, plus timely intervention by the instructor, the student’s understanding and learning of the content information will no doubt be much better. Learner control: In the Web-based online course, the individual student has more say in what and/or how he/she wishes to learn. Students also have the chance to review the whole e-lecture over and over again.
Besides, the Web-based online course often carries a cutting-edge image and prestige. Contact: The new broadband communication technology now also makes it feasible to have more, not less, instructor-to-student contact and peer-to-peer contact. The most often heard complaint, and a potentially recurring problem, is the perception that the Web-based online course lacks the face-to-face encounter between students and their instructor. The adoption of a well-designed teaching strategy and the advancement of new communication technology have, for practical purposes, minimized if not completely reversed this misconception. Interactivity: An effective learning environment is one in which there is frequent and meaningful interaction among students, between students and instructional materials, and between students and the instructor. In the Web-based online course, a mechanism is most often built into the course structure so that the instructor can provide students quick feedback on homework assignments and email questions. Students can also chat with their fellow students by using the “real time discussion (chat)” function. In short, a different kind of teacher-to-student and student-to-student interaction is implemented. Thus the Web-based online course learning environment enables students to experience engaged learning online. Accessibility: For those students who live far from educational centers and who do not have appropriate time for campus education, the Web-based online course seems to be a viable alternative. In fact, with the Web-based Internet online course, students of any age and living anywhere retain the right of equal access to lifelong learning. Web-based education becomes a necessity for those students who are not mobile because of employment, child-care responsibilities, disability, or distance from their residence.
Even though we believe that every single student is a potential candidate for the Web-based online course, and even though almost anyone can enroll in an online course, the Web-based Internet online course is appropriate only for mature students who are highly motivated, who have time management skills, and who can organize themselves well. The mature online student usually knows what he/she is looking for and seeks more detailed information. These students would never restrict themselves to the textbook knowledge provided by the instructor. They prefer to surf the net and explore their field of interest online to the fullest extent. Only this special breed of students is likely to succeed in this new, dynamic, and ever-changing learning environment.
Determinants of Municipal Bond Closed-End Fund Discounts
Dr. Ronald J. Woan, Indiana University of Pennsylvania, Indiana, PA
Dr. Germain Kline, Indiana University of Pennsylvania, Indiana, PA
The objective of this study is to investigate the potential determinants of cross-sectional variation in fund discounts/premiums for the two largest closed-end fund types: national and single-state municipal bond closed-end funds. Some researchers have argued that municipal bond closed-end funds, due to their tax-exempt status and the resulting short-sale restrictions, should generally sell at premiums. However, our current samples show that the overwhelming majority of municipal bond closed-end funds were consistently selling at discounts, with the average discounts highly significantly different from zero. The average discounts are, however, lower than the average discounts for non-municipal closed-end funds reported in the literature. Our results also indicate that the potential determinants of fund discounts/premiums consist of both accounting data and market data. There is striking similarity in the behavior of fund discounts among equity, non-municipal, and municipal bond funds. This similarity is at once surprising and encouraging, and could potentially facilitate future theoretical developments. Both closed-end funds (CEFs) and the more popular open-end funds (OEFs) are investment companies whose assets are a diversified portfolio of publicly traded stocks and other securities that they own and manage. When a CEF is organized, a fixed number of shares are issued at an initial public offering (IPO). Those shares are then traded in the secondary market. An OEF, by contrast, issues additional shares as investors buy in. Investors who desire to sell their open-end shares actually have their shares redeemed by the fund.
While OEF shares are purchased and redeemed at their net asset values (NAVs), the price of a share of a CEF is set by the market and, as Dan Navarro (1999), the product manager of the Research Product Division of Wiesenberger, pointed out, “Due to circumstances which have yet to be fully understood by the financial community and academia, most closed-end funds trade at discounts after their IPO.” This phenomenon persists in spite of the fact that the net asset values of both types of funds are readily determinable. This is the closed-end fund puzzle well known in the finance literature. Brealey and Myers (2000) consider the puzzle one of the “10 unsolved problems that seem ripe for productive research” (p. 1010). While this puzzle has been extensively investigated in the literature for equity and non-municipal bond CEFs (e.g., Malkiel, 1977, 1995; Lee, Shleifer & Thaler, 1991; Pontiff, 1995, 1996; Woan, 2001a, 2002), this research represents the first formal attempt to study closed-end municipal bond funds. It was generally believed (Abraham, Elan & Marcus 1993) that bond funds should sell at close to their NAVs since bonds represent fixed cash flows. Woan (2001a) provided highly statistically significant evidence for government and corporate bond funds that is contrary to this belief. Pontiff (1996) commented that municipal bond funds, due to short-sale restrictions, should generally sell at premiums. However, Pontiff offered no evidence to support this comment. Woan’s (2001b) preliminary study of municipal bond CEFs provided highly statistically significant evidence contrary to Pontiff’s assertion for both national and single-state municipal bond CEFs.
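To make the magnitudes concrete, the discount or premium discussed above is conventionally measured as (price − NAV)/NAV, so a fund trading below NAV has a negative value. The following sketch illustrates this calculation; the fund names and figures are hypothetical, not data from the study:

```python
# Discount/premium of a closed-end fund, conventionally defined as
# (price - NAV) / NAV: negative values are discounts, positive are premiums.
# The fund figures below are hypothetical, for illustration only.

def discount_premium(price: float, nav: float) -> float:
    """Return the fund's discount (negative) or premium (positive) as a fraction of NAV."""
    return (price - nav) / nav

funds = {"Fund A": (9.20, 10.00), "Fund B": (10.50, 10.00)}
for name, (price, nav) in funds.items():
    dp = discount_premium(price, nav)
    label = "discount" if dp < 0 else "premium"
    print(f"{name}: {dp:+.1%} ({label})")
```

A fund priced at 9.20 with a NAV of 10.00 thus trades at an 8% discount, the kind of persistent gap the closed-end fund puzzle concerns.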
According to industry statistics provided by the Closed-End Fund Association (2001), as of the end of 2000, national municipal bond closed-end funds had the largest net assets, over $38 billion, followed by single-state municipal bond closed-end funds with net assets of around $14 billion. In 2000, these two types of funds posted average total returns of 16.7% and 15.6%, based on market price and net asset value respectively, compared to the negative 9.1% return of the S&P 500 Index. Thus, the study of the valuation of municipal bond funds is of great importance. The objective of this paper is to extend Woan’s (2001b) study and provide a statistically more robust study of the potential determinants of the cross-sectional variation in fund discounts/premiums for two types of municipal bond CEFs: national and single-state. In Section 1 the variables to be included in this investigation are presented in conjunction with a brief review of relevant past studies in this area. Section 2 describes the data. Section 3 presents the empirical results, and Section 4 presents the conclusions. Various accounting (net asset value) and market-based variables have been proposed to explain the general closed-end fund puzzle. With the exception of Woan’s (2001b) preliminary study, research on the closed-end fund puzzle has concentrated on equity and/or non-municipal bond funds. These studies, using various combinations of the following variables, have given inconsistent and mixed results. This section reviews these studies and proposes proxy variables to be included in the current study. Although directional hypotheses are given for each variable, they are tentative in nature given the complexity of aggregate market behavior. Brauer (1988) indicated that open-ending a significantly discounted closed-end fund provides significant benefit to shareholders and that expense ratios are proxy measures for managerial resistance to open-ending of closed-end funds.
Effects of Introducing International Accounting Standards on Amman Stock Exchange
Dr. Mufeed Rawashdeh, Central Washington University, Ellensburg, WA
This paper examines the effects of introducing international accounting standards on the Amman Stock Exchange. Using a sample of 18 adopting firms and 33 non-adopting firms from the Amman Stock Exchange over the period 1989-1990, the study regresses the cumulative abnormal return (CAR) of the year of adoption on unexpected earnings, unexpected changes in the debt ratio, and an indicator variable (1 = adopter, 0 = non-adopter). The results of the study reveal a significant impact on the stock prices of adopting firms compared to non-adopting firms. This means that International Accounting Standards provided extra information beyond the so-called local Jordanian standards. The results were more significant for the smaller firms. The results give evidence of the value of International Accounting Standards for investors and give further impetus towards international accounting harmonisation. Jordan, like many other developing countries, had an incomplete and unsystematic set of accounting standards, developed on a piecemeal basis in reaction to emergencies or immediate needs. Jordanian accounting standards were mostly very general statements lacking any itemization or guidelines for measurement and reporting. For example, the Companies’ Act of 1989 required Jordanian companies to prepare annual reports in accordance with Generally Accepted Accounting Principles (GAAP). However, there was no law, interpretation or discussion on what constitutes GAAP. Similarly, the Amman Stock Exchange (known then as the Amman Financial Market (AFM)) and income tax authorities required companies to maintain accounting books and provide financial statements; again, no instructions were provided on either form or contents. In 1989 the Jordan Association of Certified Public Accountants (which was established in 1987) recommended that Jordanian firms adopt International Accounting Standards (IAS). Many companies voluntarily adopted the IAS in 1989 and following years.
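The cross-sectional regression described in this abstract can be sketched in code. The sketch below fits ordinary least squares on synthetic data; the variable names, coefficients, and data-generating numbers are illustrative assumptions, not the study's actual sample or results:

```python
import numpy as np

# Sketch of the cross-sectional regression described above:
#   CAR_i = b0 + b1*UE_i + b2*UDR_i + b3*ADOPT_i + e_i
# where UE = unexpected earnings, UDR = unexpected change in debt ratio,
# ADOPT = 1 for adopters, 0 for non-adopters. All data below are synthetic.

rng = np.random.default_rng(0)
n = 51                                  # 18 adopters + 33 non-adopters, as in the study
adopt = np.array([1] * 18 + [0] * 33, dtype=float)
ue = rng.normal(0, 0.05, n)             # unexpected earnings (synthetic)
udr = rng.normal(0, 0.02, n)            # unexpected change in debt ratio (synthetic)
car = 0.01 + 0.8 * ue - 0.5 * udr + 0.03 * adopt + rng.normal(0, 0.01, n)

# OLS via least squares on the design matrix [1, UE, UDR, ADOPT]
X = np.column_stack([np.ones(n), ue, udr, adopt])
beta, *_ = np.linalg.lstsq(X, car, rcond=None)
print(dict(zip(["const", "UE", "UDR", "ADOPT"], beta.round(3))))
```

A significantly positive coefficient on the adopter indicator is what the paper interprets as IAS conveying information beyond local standards.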
This study examines the effects of introducing IAS on AFM stock prices. The paper argues simply that if there is a significant impact on the stock prices of adopting firms, compared to non-adopting firms, then IAS has resulted in more information being provided beyond what is called local standards. The literature on the appropriateness of IAS for developing countries is divided between supporters and opponents. Supporters such as [Collins (1989), Fleming (1991), Wyatt (1991), Samuels and Piper (1985), Belkaoui (1988), and Kawakita (1991)] argue that the adoption of IAS by developing countries will facilitate the development of capital markets and so lead to economic development. Opponents, on the other hand, argue that the environment in developing countries is completely different from that in developed countries, which makes the IAS inappropriate [Samuels and Oliga (1982), Hove (1990), and Perera (1989)]. Perera concludes that the strong Anglo-American cultural influence on IAS makes them irrelevant to developing countries. In the middle, researchers such as [Scott (1986), Talaga and Ndubizu (1986), and Belkaoui (1988)] argued for the adoption of IAS, but only to the extent that the standards can meet the local cultural, political, economic and other environmental conditions of individual countries. Empirical research examining the consequences of adopting IAS is very limited. Niskanen et al. (1994) examined whether earnings calculated based on IAS contain incremental information over and above Finnish accounting rules for a group of listed Finnish firms. They found that IAS earnings contain significant incremental information content beyond earnings reported under Finnish standards. Auer (1995), on the other hand, reported that IAS-based earnings do not possess statistically different information beyond earnings prepared on the basis of Swiss GAAP or EC Directives. On the local Jordanian side, Juhmani (1996) investigated the effect of introducing IAS on the Amman Financial Market.
He compared the stock returns of both the sample and control groups around the earnings announcement over a short window in the year of adoption and the preceding year. He reported that the stock returns in the adoption year were different from those in the preceding year and concluded that adoption of IAS resulted in new relevant information being revealed. Juhmani failed to control for unexpected changes in earnings and/or other firm-specific variables, which may impact stock prices. This paper controls for these variables and extends the window to cover the whole year of adoption, to capture any announcement or leakage of such news during the year. This research is of great importance both for the International Accounting Standards Committee (IASC) and for the policy makers of developing countries. The IASC promotes the adoption of International Accounting Standards (IASs) on the grounds that they facilitate the development of equity markets and economic growth. Policy makers of other developing countries can use the findings of this research to decide whether their countries should adopt IASs. The findings reveal a significant impact on the stock prices of adopting firms compared to non-adopting firms. The results were more significant for the smaller firms. The results give evidence of the value of International Accounting Standards for investors and give further impetus towards international accounting harmonisation. Since the Jordan Association of Certified Public Accountants made the recommendation for the adoption of International Accounting Standards in 1989, the annual reports of all firms listed on the Amman Financial Market for the years 1989 & 1990 were screened. During the years 1989 & 1990, 103 & 105 companies, respectively, were listed and traded on the Amman Financial Market. These firms were distributed among sectors as shown in Table 1.
A firm is considered an adopter of the International Accounting Standards if the audit report states that the financial statements were prepared in accordance with the International Accounting Standards. It was found that 20 firms in 1989 and 28 firms in 1990 had voluntarily adopted the International Accounting Standards.
An Analysis of the Racketeer Influenced and Corrupt Organizations Act (RICO) as a Control Mechanism for Business Activity
Dr. William Neese, Bloomsburg University of Pennsylvania, Bloomsburg, PA
The Racketeer Influenced and Corrupt Organizations Act (RICO) has become one of the most important laws affecting business activity today, and these statutes have proven to be multidimensional and extremely flexible in criminal or civil applications. Legal business entities have vicarious liability for acts committed by individuals within the organization, whether anticipated or not, so management should be cognizant of RICO issues. The purpose of this study is thus to provide a thorough background discussion of the law. Specifically, an analysis of one set of cases attempting to control marketing activity is conducted to classify and discuss RICO scenarios for strategic and tactical consideration. The Racketeer Influenced and Corrupt Organizations Act (hereafter referred to as RICO) was originally intended as a mechanism to thwart organized crime, yet due to its broad language, multiple predicate acts, and civil as well as criminal provisions it has become one of the most important legal-regulatory influences impacting business entities today (Cheeseman 1995; Feldman 1998; St. Laurent 1997). The use of civil RICO to "resolve ordinary commercial disputes arising between perfectly reputable business firms" (Brickey 1995a, p.485) has exploded since the early 1980s, and these cases are mostly based on some form of commercial fraud (Feldman 1998). This escalation has alarmed many expert observers due to the serious nature of the potential penalties, including divestiture of ownership, restrictions on future investment and/or management activity, treble damage awards to successful plaintiffs, and even business reorganization or dissolution (Feldman 1998; Luccaro et al. 2001). In fact, most proposals to modify the RICO statutes have focused on revising the civil provisions (Brickey 1995a; Podgor 1993), and there are apparently some - albeit few - limits on its application (Brickey 1995a; Luccaro et al. 2001; Podgor 1993; St. Laurent 1997).
For example, legislative reform activity includes the Private Securities Litigation Reform Act (PSLRA) of 1995 (Brickey 1997, p.68), which "amended the civil RICO statute to severely limit civil RICO actions based on securities fraud." Yet in Mathews v. Kidder Peabody (1998), the Third Circuit Court of Appeals held that the PSLRA did not apply retroactively to the case at bar. The Supreme Court has also recently limited a statutory attempt to deny RICO claims under the McCarran-Ferguson Act, which states: "No act of Congress shall be construed to invalidate, impair or supersede any law enacted by any state for the purpose of regulating the business of insurance" (Brostoff 1998, p.19). In this case, beneficiaries holding group health insurance policies filed a RICO suit predicated on mail, wire, radio, and television fraud, alleging that they were deceived into making excessive co-payments. A consortium of insurance firms and trade associations joined the defendant seeking to prevent the action from going forward, claiming that state rights to regulate insurance providers protected under the McCarran-Ferguson Act were being violated. The United States Supreme Court disagreed, ruling that RICO legislation is complementary to rather than frustrating of state insurance regulation (Anonymous 1999; Bell 1999). According to Moohr (1997, p.1141), "Federal prosecutors continuously press for more expansive interpretations of statutes in order to secure convictions for a greater range of conduct." This is made feasible largely due to the vague nature of statutory language intended to preserve that law's integrity and avoid legal loopholes (Feldman 1998); the body of language governing RICO has been called "confused, inconsistent and unpredictable" (Albright 2001, p.665). 
The downside risk is the potential for arbitrary, discriminatory, or otherwise unintended enforcement on the criminal side by various legal officials, such as prosecutors seeking to further their careers through convictions of high-profile corporate executives all the while ignoring equally culpable yet less glamorous cases (Batey 1997; Richman 1999), as well as uncertainty in civil cases (Albright 2001). Given the vicarious nature of corporate liability for the illegal activity of employees and the potentially devastating consequences (Bennett 2000; DeMott 1997; LeClair, Ferrell, and Ferrell 1997; Mascarenhas 1995), it is in the best interest of all involved to understand and prevent criminal or civil RICO charges from ever being brought. The intent of this analysis, then, is to provide marketing managers, academics, and other interested laypersons first with a comprehensive background discussion of the major components of RICO legislation and court rulings, then to illustrate how these bedrock issues are applied in the marketing context through legal case analysis. The Racketeer Influenced and Corrupt Organizations Act was passed in 1970 as Title IX of the Organized Crime Control Act, and was specifically "designed to strike at the economic base of organized crime by imposing severe criminal penalties" (Brickey 1995a, p.417). Statutory language delineating RICO is housed in 18 U.S.C. §§ 1961 through 1968. Section 1961 defines terms used in the statutes, and lists dozens of federal and state crimes as predicate acts (i.e., criminal behavior upon which subsequent RICO charges are based, such as murder, kidnapping, counterfeiting, theft from interstate shipping, embezzlement, extortion through credit transactions, interstate transportation of stolen property, and multiple other offenses). Violations of 18 U.S.C. § 1341 relating to mail fraud and 18 U.S.C. 
§ 1343 defining wire fraud are two of the most commonly charged predicate offenses (Bennett 2000), are particularly relevant in marketing, and are therefore the focal predicate acts for the RICO analysis reported here. Mail and wire fraud are inchoate or incomplete crimes that prohibit "an act performed in anticipation of committing another criminal act… [and that] mandate punishment even when the actor has not consummated the 'target' crime that is the object of his or her efforts" (Moohr 1998, p.16).
Chinese Consumer Behavior: A Cultural Framework and Implications
Dr. Wen Gong, Rochester Institute of Technology, Rochester, NY
Although consumer decision making remains a focal research interest (Bettman, 1998), international marketers continue to need a better understanding of cross-cultural issues and their effects on decision making. This paper explores the impact of Chinese culture on each stage of Chinese consumers' decision-making process. A preliminary analytical framework is then developed for international marketers in their conceptual rethinking and management decision making when marketing in China. Potential implications are derived and discussed. The need for greater cross-cultural understanding of consumer behavior has been proclaimed by both international marketing practitioners and researchers as essential for improving international marketing efforts (Briley et al., 2000; Hampton and Gent, 1984; Leach and Liu, 1998; McCort and Malhotra, 1993). Research has shown that differences in value systems across various cultures appear to be associated with major differences in consumers’ behavior (Grunert and Scherhorn, 1990; Lowe and Corkindale, 1998; McCracken, 1989; Tansuhaj et al., 1991). With its one-billion-plus population, China has the greatest number of consumers in the world. It has achieved tremendous economic growth since the adoption of the open-door policy in 1978. As a result of the market reforms, and because of the sheer market size it presents, China has increasingly become a coveted market. Joining the WTO will make China ever more interconnected with the global economic system. These factors warrant an increase in research attention to this market. What adds urgency to such study is the need for a solid cultural understanding of Chinese consumers. No doubt this will provide international marketers with valuable information for formulating marketing strategies as well as creating advocacy messages and corrective responses. 
Additionally, Chinese consumer behavior may have tremendous implications for Greater China and other Eastern societies such as Singapore and Malaysia, where Confucian cultural values still have a profound influence regardless of economic achievement. Chinese culture has long been perceived to be collective-oriented, characterized by a set of relationships defined by Confucian doctrine, including living properly (being polite and obeying the rules), respect for authority, desire for harmony, reduced competitiveness, contentedness, conservatism, order and stability in society, and tolerance of others. While an individual in the West identifies him/herself as a separate entity, stressing self-reliance and equality, in Chinese culture an individual is inherently connected to others and fosters relationships through reciprocity, sentiment, and kinship networks (Joy, 2001). This leads to the Chinese focus on ‘face’ (Ho, 1975). Although a human universal, the ‘face’ concept is particularly salient for people of Confucian culture, within which proper living, social consciousness, moderation and moral self-control are stressed. As Ho (1975) and Hsu (1985) observe, face is more a way of meeting the expectations of others than acting in accordance with one’s own wishes. Western culture is seen to be adventurous in nature, and the mode of living is characterized by change and movement as well as confrontation with or ‘active mastery’ of the external environment (Hong et al., 1987). In contrast, the Chinese way of life centers on adaptation or ‘passive acceptance’ of fate, prizing stability and seeking harmony and happiness with the given, natural conditions. The dominant approach to studying consumer behavior has been to study the elements of the consumer decision process since the publication of the consumer behavior textbook by Engel, Kollat and Blackwell (1978) (Fletcher, 1988). 
Culture, along with other elements of the environment, affects all stages of the process. Need/problem recognition is related to both the desired state and the actual state of a consumer and may be triggered by changes in either state. Based on the need-producing situations in people’s everyday life, Bruner (1983) classifies consumers into two groups: the Actual State Types, whose problem recognition occurs most frequently due to changes in the actual state; and the Desired State Types, whose problem recognition occurs more as a result of the desired state changing. The Chinese are accustomed to valuing modesty and self-effacement. While Western culture emphasizes individual, competitive behavior, Chinese culture stresses balance, moderation and harmony, which have been referred to as a ‘cult of restraint’ (Lifton, 1967). Chinese consumers are reluctant to pioneer; they are not demanding and tend to have few desires. As a result, the Chinese are slow to accept new products and services. The ‘prudence’ or ‘risk-averse’ characteristics embedded in the Chinese personality may help explain their reluctance to try a new product or service. In a recent study of China’s emerging residential property market, Wang and his colleagues (2001) found that, despite a host of measures by the Chinese government encouraging private ownership of housing,
Research on the Relationship Between Market Orientation and Service Quality--An Empirical Assessment of the Medical Industry in Central Taiwan
Ya-Fang Tsai, National Yunlin University of Science & Technology, Taiwan, R.O.C.
Since people in Taiwan have gradually been paying more attention to the quality of health care service, it has become ever more important to improve and maintain the quality of hospitals. One probable quality-improvement project is to emphasize the concept of market orientation among clinical staff. Because nurses are at the forefront in providing health care, they were selected as the subjects of this study. In this study, we used structured questionnaires administered to nurses in five hospitals in central Taiwan to investigate the relationship between their concept of market orientation and the service quality they provide. Questionnaires were completed in August 2002; of the 200 sent out, 145 were returned, of which 131 were usable, making for an effective questionnaire feedback rate of 65.5%. This study clearly points to a positive relation between the market orientation of nurses and the service quality they provide. Besides this, the operating style of a hospital is an important factor affecting nurses' views of market orientation. In recent years, on account of the implementation of some parts of Taiwan’s health care policy, the health care industry in Taiwan has been under increasing competitive pressure. Faced with this environment, there is a greater need for hospital executives to understand their patients’ needs and to incorporate customer orientation and market-driven forces into their marketing strategies (Mohr, 2001). For this reason, hospital executive units have been devoting a great deal of time and effort to better understanding how to establish stronger relations between doctors and patients and how to improve the quality of their services to further enhance business performance. Thus, how to offer clinical services and provide clinical environments which fully meet patients’ needs has become a critical issue. The concept of ‘market orientation’ is derived from the conventional marketing concept. 
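The effective feedback rate reported above is simple arithmetic; as a minimal sketch in Python (the function name is hypothetical, the figures are those reported in the study):

```python
def feedback_rate(usable: int, sent: int) -> float:
    """Effective questionnaire feedback rate, as a percentage of those sent out."""
    return round(usable / sent * 100, 1)

# Figures reported in the study: 200 questionnaires sent, 131 usable.
print(feedback_rate(usable=131, sent=200))  # 65.5
```

Note that the study computes the rate against the 200 questionnaires sent, not the 145 returned; against returns, the usable rate would be roughly 90%.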
The concept emphasizes that a firm should understand what customers’ needs are. However, competitive pressures and environmental factors are more complicated in today’s clinical business environment. Clinical executives must not only consider how to offer products that satisfy patients’ needs but also take competitive factors into account. Through marketing research, the marketing managers of hospitals can analyze their competitors’ marketing activities and then formulate their own marketing strategies. They should not only consider external marketing factors, but also maintain a sound internal structure to support marketing activities in both the external and internal environment. In other words, they should enhance internal coordination and integrate cross-functional cooperation. Take physical examinations within a community as an example. Obviously, the examination department has an essential role to play here; however, above and beyond that, technicians and nurses should similarly be involved, and for dental check-ups, dentists must participate. Therefore, the marketing activities of a hospital should draw on coordination across multiple functions. In recent years, a great deal of research (Sheth & Sisodia, 1999, for example) has made notable contributions to the literature pertaining to market orientation. These include studies on the relationships between market orientation and styles of strategies, the value of a company, innovation, organizational performance, and the performance of new product development (Matsuno & Mentzer, 2000; McNaughton, Osborne, Morgan & Kutwarw, 2001; Kim & Srivastava, 1998; Conrad, 1999; Louser, 2000). Nevertheless, very few of these studies have examined the health care industry, even though clinical services, which concern people’s health and lives, cannot be ignored. 
Bearing this in mind, this empirical study tests the relationship between market orientation and service quality in the hospital industry. Kotler (2000) has claimed that marketing concepts are determined by the target market, customer orientation and coordinated marketing to maintain customers’ satisfaction. He has determined that market orientation is composed of three elements, namely intelligence generation, the dissemination of intelligence, and responsiveness. Aware that a market-oriented organization is defined by the actions and marketing strategies of that organization, Kohli and Jaworski (1990) stated that market orientation is a kind of behavior that includes intelligence generation (formal and informal processes), the dissemination of intelligence, profit orientation, customer orientation, responsiveness, and so on. In addition to the above-mentioned scholars who consider market orientation as a process or the actions involved in the communication of information, others refer to it as a set of strategies or an organizational culture. Deshpande and Webster (1989) reported that market orientation is not only a policy, but also a kind of organizational culture and atmosphere, which inspires personnel to be more effective in their behavior. Slater and Narver (1994) also referred to market orientation as an organizational culture which ensures that an organization continues to deliver superior value to its customers and that superior value is a commitment of the organization to its customers. This company culture is composed of three components: customer orientation, competitor orientation and interfunctional coordination.
Adoption of e-Governance: Differences between Countries in the Use of Online Government Services
Dr. Satya N. Prattipati, University of Scranton, Scranton, PA
The emergence of the digital economy has affected the functions and roles of governments. The advent of e-Government has been one of the main impacts of Information and Communication Technologies (ICT) on governments. Many governments have realized the importance of ICT in bringing efficiency and transparency to the functioning of government. While many governments have started offering some government services through the Internet, there is significant variation among countries in the actual use of these services by citizens. Governments cannot realize the potential benefits of e-Governance unless people use these services. This study attempts to identify the factors that influence the use of e-Governance services by analyzing the differences between countries with varying degrees of use of online services offered by their governments. The results identify four factors that are significantly associated with the use of online government services. By concentrating on these factors, governments can encourage more people to use the online services offered to them. e-Government can be broadly defined as the use of Information and Communication Technologies (ICT) to improve the activities of government organizations (Heeks, 2002). However, some definitions restrict e-Government to Internet-enabled applications only; this paper uses the latter definition. There are three main domains of e-Government: improving government processes (e-Administration), connecting citizens (e-Citizens and e-Services), and building external interactions (e-Society). The focus of this paper is on connecting citizens (e-Citizens and e-Services). Such e-Services deal with the relationships between government and citizens. They involve talking to citizens, listening to citizens, and improving public service. 
As users of ICT, governments can play a significant role by creating new models of governance, educating their own employees about the opportunities of ICT, improving the delivery of government services, strengthening the democratic process, and generating significant savings and revenues for the economy as a whole. e-Governance can promote democratic processes, open government, and transparent decision-making in governments. In recent years, a large number of countries have made significant progress in this area. According to a UN estimate (United Nations 2002), 169 out of 190 governments have some kind of website presence. A few dozen governments provide “interactive” elements, and transactional e-government services are available in seventeen countries. e-Government is rapidly becoming a priority in developing countries. However, the realization of the full potential benefits of e-Governance depends not only on the implementation of business processes based on e-Governance but also on the willingness of citizens to adopt and use the on-line services offered by the government. The government should identify the impediments and factors that affect citizens’ use of on-line government services and take effective measures to promote those factors that encourage people to use on-line services. Initially, the Internet was used primarily to provide government information online. Now all countries are making progress in developing online services. By the year 2000, most governments had articulated some kind of e-Government strategy as a basis for comprehensive programs and had established goals at the central as well as department levels. Generally, the targets have been concerned with the number of government services online, and not with usage. There are some targets relating to e-procurement and e-tendering. e-Procurement offers the promise of a more transparent and efficient way for governments to purchase goods and to tender projects. 
Some countries have provided facilities to pay taxes online and to apply for government benefits, permits, etc. Online technology is also widely used for the promotion of e-democracy, through e-voting, online consultations, and other forms of online citizen participation. Governments have chosen different approaches to getting services online. Most countries have not prioritized by citizen demand but have instead put services online primarily in order of ease of implementation. For many countries, continued progress (in offering online services) will depend on overcoming the challenge of restructuring back-office systems and processes that are rooted in established bureaucratic procedures. e-Governance has huge potential to make a significant impact on the daily lives of citizens of developing countries who have been suffering due to poor governance and systemic corruption. There have been many anecdotal cases of e-Governance implementation that demonstrated significant benefits in terms of efficiency of service and reduced corruption. One such case, implemented in India, is presented below. The Bhoomi project was implemented by the state government of Karnataka, India; Bhoomi means “earth” in Indian language(s). The objective of the project was to computerize approximately 20 million village land records and provide an effective interface to around 6.7 million farmers. Farmers need those land records to obtain farm credits and to buy and sell land. The manual system that existed before was cumbersome and prone to errors and delays, and it provided opportunities for officials to harass farmers and extract bribes. Land Records Kiosks (known as Bhoomi Centers) were installed in 177 village clusters. Each kiosk has been equipped with computers linked to a central database. Farmers can now obtain land records by paying a nominal fee of INR 15 (USD 0.30), as compared to the few hundred Indian Rupees rumored to have been paid as bribes earlier. 
This transaction takes only a few minutes to complete in the new system, as compared to several days under the old system.
Perceptions of Effectiveness - A Study of Schools in Victoria, Australia
Martin Samy, Monash University, Victoria, Australia
A quantitative effectiveness measurement based on the perceptions of a group should be an effective mode of evaluating the level of satisfaction. This study establishes such a measurement in relation to school effectiveness through the Quality Situation Assessment Instrument. In order to measure the level of effectiveness perceived by their communities, educational institutions can use this instrument to calculate the Quality Effectiveness Index. This pilot project provides evidence that perceptions of school effectiveness might not necessarily be associated with Student Achievement Data. This paper is based on a pilot project, which was undertaken as part of a wider study. The study examines the concept of ‘perceptions of effectiveness’ and uses data from school communities in Victoria, Australia. The paper introduces the reader to the system of education in Australia. The focus is on the state of Victoria and its catholic and state systems’ ‘School of the Future’ reform program. The research component of this paper analyses the Quality Effectiveness Index (QEI) of 6 schools selected across the educational regions in Victoria. QEI is a quantitative measurement of the perceptions of effectiveness of individual schools and is based on an established instrument (Poston: 1997) modified substantially for use in Australia. The first hypothesis states that there is no expected significant difference in the perceptions of effectiveness of school community participants between schools in the state and catholic systems in Victoria. The second hypothesis states that there is no expected significant difference in the perceptions of effectiveness of school community participants between state schools in the country and metropolitan regions of Victoria. Further analysis investigates the direction and significance of the relationships between the QEI and Student Achievement Data (SAD) of schools. 
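Testing the first hypothesis amounts to comparing mean QEI scores between the two school systems. A minimal sketch of such a comparison using Welch's t statistic (the scores below are hypothetical placeholders, not figures from the study):

```python
from math import sqrt
from statistics import mean, variance

def welch_t(a: list, b: list) -> float:
    """Welch's t statistic for two independent samples (e.g. QEI scores)."""
    return (mean(a) - mean(b)) / sqrt(variance(a) / len(a) + variance(b) / len(b))

# Hypothetical QEI scores for three state and three catholic schools.
state_qei = [3.1, 3.4, 2.9]
catholic_qei = [3.3, 3.5, 3.2]
t = welch_t(state_qei, catholic_qei)
```

A t value near zero would be consistent with the "no significant difference" hypothesis; with only six schools, however, the paper rightly notes the limits on the statistical analysis a pilot of this size can support.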
The study of education effectiveness is, and will always be, a controversial area of study. It is politically controversial because the future of the younger generation will set the parameters for the future of a country, and because school infrastructure is costly. Research into school effectiveness is prolific and has dispelled the fallacy that ‘schools can do nothing to change society around them and has also helped to destroy the myth that the influence of family background is so strong on children’s development that they are unable to be affected by school’ (Reynolds: 1995: 3). Furthermore, educational organisations are complex entities that are based upon an input of economic resources, which in an Australian context is determined by a funding formula. With increasing demand for quality, effectiveness and high expectations for educational outcomes, governments are faced with the task of implementing popular policies that satisfy the perceptions of communities. Because of this complexity, research in this area has been challenging, and many studies in the past have exhibited conceptual and technical flaws (Creemers: 1994: 9). As this is a pilot project, there are limitations to the statistical analysis that can be performed. This limited research introduces the Quality Situation Assessment Instrument (QSAI) and discusses practical implications for the measurement of school effectiveness based on the perceptions of those with an involvement with the school. In part 1 of this paper, the background of the study is briefly explained; the first section looks at the funding for education of the catholic and state systems in Victoria. It continues with a discussion of the Schools of the Future reform program commenced in the 1990s and the substantial changes to the reforms in the new century. In part 2, the research analysis looks at previous studies, the methodology, the instrument used in this study, the variables and their measurement, and the sample and data collection. 
The final section of part 2 details the statistical findings of the study. Under the constitution, education in Australia is the responsibility of the states. Each state or territory has its own system of education. Although the federal government is not directly involved in the educational objectives of the states, it plays an important role because it provides grants to the states (Education Victoria: 1998). The state raises most of the funds for resourcing through its budget process. The federal government provides special purpose grants, and in addition to these, individual schools raise funds through their local communities. The federal government also provides funds to the independent school and catholic school systems.
The Influences of Personal and Sociological Factors on Consumer Bank Selection Decision in Malaysia
Che Aniza binti Che Wel, Universiti Kebangsaan Malaysia, Malaysia
Sallehuddin Mohd. Nor, Universiti Kebangsaan Malaysia, Malaysia
The authors discuss the influences of personal and sociological factors on the consumer bank selection decision in Malaysia (an Oriental culture), where family and social relationships are still highly valued. The study found that personal factors have greater influence on the bank selection decision than sociological factors. In the present environment, almost everyone is involved with a bank, one way or another, in financial dealings and economic activities. In his personal capacity, an individual may require the services of a bank to cash his pay cheque, apply for a cashier's order (or banker's cheque) or a demand draft for a share issue, send money to someone locally or overseas by way of demand draft, mail transfer or telegraphic transfer, open or maintain a current account or savings account, place Certificates of Deposit, secure a housing loan for the purchase of a house for his own residence, or even secure a personal loan or an overdraft for investment purposes. Thus, the services of banks are necessary for our daily economic and financial activities. According to digested surveys from various industry resources, about 59 percent of American households have a "one-account relationship" with any single bank. The percentages are higher for savings and loans (S&L) and thrifts, lower for credit unions. This estimate excludes "ATM card ownership" as a product (Violano and Collie, 1992). The same phenomenon may hold true in Malaysia. Therefore, before such a relationship can be developed, it is important to understand the factors that influence the consumer bank selection decision. The study therefore focuses on the two major factors (personal and sociological) that are believed to have great influence on consumer bank selection decisions. 
Several consumer behavior models which are anchored to learning theories have focused on how consumers make choice decisions over time (Andreasen 1965; Engel, Blackwell, and Miniard 1986; Hansen 1972; Howard and Sheth 1969). Howard and Sheth (1969) proposed that consumers like to simplify their extensive and limited problem-solving situations into routinized behavior by learning to reduce the number of products and brands under consideration into an evoked set, which is a fraction of the alternatives available and familiar to the consumers (Reilly and Parkinson, 1985). Limiting the choices to an evoked set allows easy information processing and, therefore, simplifies the task of choosing (Hoyer 1984; Shugan 1980). Consumer decision-making efficiency also improves when the information processing tasks are simplified and bounded. The central argument of these theories is that consumers, due to limited capabilities of information processing, use a variety of heuristics to simplify their decision-making task and manage information overload (Bettman 1979; Jacoby, Speller and Kohn, 1974). Consumers are also motivated to reduce risk (Bauer, 1960; Taylor 1974). Perceived risk is associated with uncertainty about the magnitude of outcomes. Consumers develop a variety of strategies to reduce perceived risk. Cognitive consistency theories, such as balance theory (Heider, 1946) and congruity theory (Osgood & Tannenbaum, 1955), suggest that consumers strive for harmonious relationships in their beliefs, feelings and behavior (McGuire, 1976; Meyers-Levy & Tybout, 1989). The influences of society, family and reference groups on consumer behavior are profound (Coleman 1983; Levy 1966; Nicosia and Mayer 1976; Sheth 1974a; Stanford and Cocanougher 1977). Through the process of socialization, consumers become members of multiple social institutions and social groups (Moschis and Churchill 1978). 
These social institutions and groups have powerful influences on consumers in terms of what they purchase and consume. Conforming to such social influences and pressures, consumers consciously reduce their choices and continue to engage in certain types of consumption patterns that are acceptable to the social groups to which they belong (Park and Lessing 1977). Such group influences are also captured in the normative component of attitude-behavior models (Miniard and Cohen 1983; Ryan 1982; Sheth 1974b; Sheth, Newman and Gross 1991). Social group influences are coupled with powerful word-of-mouth communications (Arndt 1967). Consumers either actively seek the information or experiences of other consumers, or they overhear other consumers relating their experiences. Several researchers have indicated that perceptions and behaviors are influenced by those of others, particularly in high perceived-risk situations (Grewal, Gotlied, and Marmorstein 1994). Generally called informational social influence, word-of-mouth communication can lead toward consumer acceptance of products and marketers or can repel them. The pioneering studies by Everett Rogers (1962) on the diffusion of innovations suggested that opinion leaders, through word-of-mouth communications, could exert direct influence on other consumers to adopt an innovation. Empirical research using different methodologies and approaches has been conducted in various parts of the world to examine the criteria which influence consumers in selecting their bank. Anderson et al. (1976), in a survey of 466 respondents in the United States, indicated that ‘recommendation by friends’ was the main criterion in bank selection, followed by ‘reputation’ and ‘availability of credit’. Chin Tiong Tan and Christina Chua (1976) did the same study in Singapore and concluded that social factors have a stronger influence than other variables, thus supporting the Anderson et al. (1976) findings. 
These findings make intuitive sense: as expected in an oriental culture, where social and family ties are closer, consumers are more receptive to the advice of friends, neighbors and family members.
Employee Expectations and Motivation: An Application from the “Learned Helplessness” Paradigm
Dr. Steven B. Schepman, Central Washington University/Ellensburg and Lynnwood, WA
Dr. F. Lynn Richmond, Central Washington University/Ellensburg and Lynnwood, WA
The effects of a perception of “helplessness” on a person’s sense of “self-efficacy” and situational control were analyzed in the context of the first of the three key relationships in the process motivation theory of Victor Vroom. Perceptions of levels of helplessness were manipulated in an experimental design using random, non-contingent feedback and failure on an initial task. Subjects’ levels of perceived ability and control on a second task were assessed prior to beginning that task. Statistically significant differences were found between the helplessness groups and the control group on perceptions of control as well as perceived ability to accomplish the second task (“self-efficacy”). The implications of such lowered perceptions of control and/or “self-efficacy” for members of organizations are discussed. The “learned helplessness” concept of individual psychology essentially holds that if the outcomes or “feedback” people receive in response to their actions appear to bear no predictable relationship to the actions which initiated them, the initiators will, in time, come to believe that they are unable to control the outcomes associated with their own behaviors. Mikulincer (1994) specified two distinct reactions which can be associated with expectations of future control or lack of control. The one of greatest relevance to this paper he termed “personal helplessness,” which can occur when people believe that they may lack the necessary skills or abilities to perform a particular task. In addition, feelings of personal helplessness may develop as a result of exposure to uncontrollable outcomes. Both instances can lead to a reduction in a person’s feelings of “self-efficacy,” which Bandura (1977) defined as a person's conviction that he or she is (or is not) capable of successfully performing a behavior in order to produce certain outcomes.
In the extreme, this belief can lead to situations in which individuals will not even attempt certain behaviors and, in a more generalized form, is related to the individual’s perception of the likelihood of his or her being able to perform successfully (Bandura, 1977, 1986, 1997). Research has provided support for the general debilitating nature of such negative expectations on a wide variety of human responses, including, of particular interest in this paper, academic and work performance (Peterson, Maier, & Seligman, 1993). The authors of the present report argue that the same body of data which they developed to assess empirical support for the “learned helplessness” conceptual framework might also be used to determine whether or not it can provide empirical support for the first of the three key relationships (“expectancy”, “instrumentality”, and “valence”) in Victor Vroom’s (1964) well-known paradigm of the employee motivation process. Vroom’s model holds that for a company-provided reward system to have the intended motivational power for its employees, the following three key relationships must be present. The employees must believe that: (1) they will have the requisite skills/abilities to be able to achieve their performance targets (“expectancy”); (2) they will get the material and/or non-material rewards promised by the company if they achieve these performance targets (“instrumentality”); and (3) the promised rewards will be valued by the employees (“valence”).
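As a rough illustration of the three relationships just listed, Vroom's framework is often summarized in multiplicative form, motivational force = expectancy x instrumentality x valence. The sketch below uses that common textbook rendering, not the authors' own formulation, and the numeric scales are chosen purely for illustration:

```python
def motivational_force(expectancy, instrumentality, valence):
    """Textbook multiplicative summary of Vroom's model.

    expectancy:      belief that effort yields target performance (0..1)
    instrumentality: belief that performance yields the reward (0..1)
    valence:         value the employee places on the reward (-1..1)
    """
    return expectancy * instrumentality * valence

# If any one link collapses -- e.g. "expectancy" driven down by a
# learned-helplessness experience -- the product collapses with it.
strong = motivational_force(0.9, 0.8, 1.0)
helpless = motivational_force(0.1, 0.8, 1.0)
print(strong, helpless)
```

Because the terms multiply, driving “expectancy” toward zero (as a learned-helplessness manipulation does) suppresses motivation regardless of how attractive or reliable the promised reward is.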
While a disconnect in any of these three key relationships is likely to substantially depress employee motivation, our interest in this paper is to report on just the first of the three--the one called “expectancy” by Vroom (Van Eerde & Thierry, 1996). The first key relationship in Vroom’s model holds that for a company-provided reward system to have the intended motivational power for its employees, the employees must believe that they will have the requisite skills/abilities to be able to achieve their performance targets (“expectancy”). On the individual employee level, the “self-efficacy” concept refers to an individual employee’s belief in his/her own personal ability to successfully perform (or not) according to the work expectations. That is, according to Vroom, the first of the three necessary elements underlying employee motivation is for the employee to believe that if s/he applies him- or herself, s/he will be able to accomplish the performance goals. The original purpose of the study was to further explore people’s initial reactions to exposure to uncontrollable events using the traditional learned helplessness laboratory paradigm. Feelings of personal helplessness were measured following a traditional learned helplessness training task to assess the effects on self-efficacy and individual expectations for being capable of accomplishing future tasks. In this paper the authors apply their empirical data to Vroom’s first step (“expectancy”) in his model of the employee motivation process. One hundred eleven undergraduate psychology students were included as subjects in the study, earning extra credit for their participation. Subjects were randomly assigned to high helplessness (n = 30), low helplessness (n = 44), and no helplessness/control (n = 37) groups.
Holding Cost Reduction in the EOQ Model
Dr. Peter J. Billington, Colorado State University – Pueblo, Pueblo, CO
The introduction of a capital expense to reduce the setup cost has expanded the classic EOQ model and generated numerous new research insights. The original intent of that research was to reduce the order quantity to better fit the JIT lean manufacturing model of setup reduction. In this paper, the EOQ model is further studied with a reduction in the per unit holding cost to determine if total cost can be reduced, even though the order quantity is increased. Results show that the total cost can be reduced under specific situations. This new model is combined with previous research on setup cost reduction to show that further total cost reduction is possible. The classic economic order quantity (EOQ) model is often studied as a way to analyze the trade-off between setup and holding costs to minimize the total annual cost of holding inventory and setup (or ordering). With the publication of two seminal articles (Porteus, 1985; Billington, 1987), the EOQ model was expanded to include a capital expense to reduce setup costs. This new model was in line with and helped explain the practice of setup reduction, necessary for JIT systems to work effectively by allowing a more economical, smaller order quantity. The resulting series of research on setup reduction includes articles by Spence and Porteus (1987), Paknejad and Nasri (1988), Nasri et al (1990), Kim et al (1992), Hong et al (1992), Hong et al (1993), and Hong et al (1996), among others. The remaining component of the EOQ that has not been studied is the holding cost, which includes the cost of funds invested in inventory, the storage facility, handling inventory, insurance, taxes, obsolescence, spoilage, deterioration and theft (Heitger, 1992). The ability to reduce the holding cost per unit is limited due to the fixed nature of many of these holding cost components. Internal rates of return are set, and insurance and taxes are out of the control of decision makers.
However, the cost of handling inventory could be reduced through automation, and the cost of obsolescence and spoilage can be reduced through capital expenditure. Consider the following examples. Without refrigeration and freezers, the spoilage rate of produce and frozen goods is very high. If a supermarket did not have freezers and refrigeration, produce would spoil and frozen goods would thaw. The result would be a very limited stock of produce, and consumers would be forced to buy on a daily basis. In fact, in the early part of the 1900s in large cities, it was not uncommon for the “vegetable man” and the “ice man” to cruise the residential streets daily selling their produce and ice. Shoppers would buy just enough to be consumed before the produce spoiled. The advent of electric refrigerators for the home allowed consumers to purchase larger quantities, often a week’s worth. Supermarkets install refrigeration units and freezers to allow a larger quantity of items to be stocked. The result is that the capital expenditure on freezers and refrigerators allowed the spoilage cost to drop dramatically and the order quantity to increase, perhaps with a reduction in total cost. In the following model, Q is the quantity to order, S is the cost per order (or setup), D is the annual demand, and H is the per-unit cost of holding one unit of inventory for one year. The classic EOQ formula, Q* = sqrt(2DS/H) (1), shows that if H were decreased, then Q* would increase, just as occurred in the previous examples. The total cost would also decrease, since the minimum total cost is TC* = sqrt(2DSH) (2). However, the total cost in (2) does not include any capital cost to reduce H. The full holding cost reduction model will introduce this capital expense. Although some detractors will suggest that the EOQ has limited real applicability, this should not stop further analysis of the model to understand the trade-offs inherent in many types of inventory situations.
The analysis in this paper should provide insights and stimulate further discussions of holding cost reduction. The EOQ model as shown in (1) and (2) will be expanded to consider an annual capital cost to buy a certain amount of reduction in H. The classic EOQ model is formulated with the assumptions that demand is stationary and deterministic over an infinite horizon, costs are known, and backlogs are not allowed.
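The trade-off described above can be sketched numerically. The functions below implement the classic formulas Q* = sqrt(2DS/H) and TC* = sqrt(2DSH); the linear form of the holding-cost reduction expense is an illustrative assumption only (the paper's full model is not reproduced here), and all figures are hypothetical:

```python
from math import sqrt

def eoq(D, S, H):
    """Classic economic order quantity: Q* = sqrt(2DS/H)."""
    return sqrt(2 * D * S / H)

def total_cost(D, S, H):
    """Minimum annual setup-plus-holding cost at Q*: TC* = sqrt(2DSH)."""
    return sqrt(2 * D * S * H)

# Hypothetical figures: 10,000 units/yr demand, $100 per order,
# $5 per unit per year holding cost.
D, S, H = 10_000, 100.0, 5.0
print(eoq(D, S, H))         # order quantity grows as H falls
print(total_cost(D, S, H))  # minimum total cost shrinks as H falls

# Assumed, for illustration only: holding cost can be cut to H_new at an
# annual capital cost of a * (H - H_new). The reduction pays off when the
# saving in TC* exceeds that capital cost.
a, H_new = 150.0, 3.0
saving = total_cost(D, S, H) - total_cost(D, S, H_new)
capital = a * (H - H_new)
print(saving > capital)
```

With these numbers the reduction is worthwhile: cutting H from 5 to 3 raises Q* but lowers TC* by more than the assumed capital expense.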
Differences in Environmental Scanning Activities Between Large and Small Organizations: The Advantage of Size
Dr. Karen Strandholm, The University of Michigan-Dearborn, Dearborn, MI
Dr. Kamalesh Kumar, The University of Michigan-Dearborn, Dearborn, MI
The purpose of this study is to examine the environmental scanning differences between large and small organizations and their performance implications. Results from data collected from 221 hospitals indicate that smaller organizations do not scan as broadly and as frequently as their larger counterparts. Results also indicate that there is an association between an organization’s scanning activities and organizational performance, in larger organizations as well as smaller ones. These findings, taken together, appear to indicate that the decreased scanning activity may place smaller organizations at an information disadvantage, and hence a competitive disadvantage, relative to larger organizations. Environmental scanning provides organizations with information about opportunities and threats that could enhance their performance or threaten their survival (Beal, 2000; Bourgeois, 1980; Daft, Sormunen, and Parks, 1988; Lang, Calantone, and Gudmundson, 1997). In order to capture the relevant information from the environment in a timely manner, managers must make a determination as to environmental scanning scope and environmental scanning frequency. As an organization’s environmental scanning scope increases, so does the information available to identify opportunities and threats (Beal, 2000; Jackson and Dutton, 1988). As environmental scanning frequency increases, so does the amount, timeliness, and relevance of this information (Beal, 2000; Hambrick, 1982). Although environmental scanning cannot be viewed as leading directly to improved performance, organizations that scan their environment effectively are viewed as having an information advantage over those that do not, improving their ability to align with the environment (Daft, Sormunen, and Parks, 1988).
It is generally agreed that organizations that align themselves with the environment outperform those that do not maintain this alignment (Beal, 2000; Chaganti, Chaganti, and Mahajan, 1989; Hitt, Ireland, and Stadter, 1982; Tan and Litschert, 1994; Venkatraman and Prescott, 1990). However, a key competitive disadvantage faced by smaller organizations, the relative lack of slack resources (Golde, 1964; Lang et al., 1997; Pearce, Chapman, and David, 1982), may force smaller organizations to make choices as to environmental scanning scope and frequency that could place them at an information disadvantage relative to their larger counterparts. While studies show that small organizations do obtain value from their environmental scanning activities (e.g. Beal, 2000; Dollinger, 1985; Lang et al., 1997), because these studies do not include a comparison to larger organizations, they are not of much help in determining whether smaller organizations are at an information disadvantage relative to larger organizations. This is an important issue given that ninety-nine percent of all businesses located in the U.S. are classified as small businesses and these businesses employ 52% of all private sector workers (U.S. Small Business Administration, 2001). Thus, the purpose of this study is to examine the environmental scanning differences between large and small organizations and their performance implications. Environmental opportunities and threats can come from the task environment--those environmental sectors that have a direct impact on the organization, such as the competitor, supplier and customer sectors--and/or the general environment--those environmental sectors that have an indirect impact on the organization, such as the demographic, economic, political/legal, social/cultural, and technological sectors (Daft et al., 1988; Jackson and Dutton, 1988).
As such, it appears reasonable to believe that scanning the environment broadly in order to obtain information from as many different environmental sectors as possible will facilitate the organization’s alignment with the environment, notwithstanding organizational size. For example, the larger organization has the ability to enter into and compete in more product/market domains than the smaller organization (Yasai-Ardekani and Nystrom, 1996) due to the availability of more slack resources (Chen and Hambrick, 1995; Singh, 1990). As the number of product/market domains in which an organization competes increases, so do the multiplicity and heterogeneity of environmental relations faced by the organization (Yasai-Ardekani and Nystrom, 1996). In order to effectively manage all of these different relationships, the larger organization needs a broad scanning scope. However, it may be even more important for smaller organizations to scan broadly than for larger organizations. Environmental opportunities and threats can come from any number of sectors in the task environment and/or the general environment (Daft et al., 1988; Jackson and Dutton, 1988). Organizations that do not scan broadly are likely not only to miss capitalizing on opportunities but also to fail to guard against threats. While both large and small organizations may miss opportunities and still survive, a missed threat could jeopardize the smaller organization’s survival. Compared to larger organizations, the smaller organization generally lacks the resources to absorb the losses associated with not recognizing or misinterpreting a threat (Lang et al., 1997).
Intellectual Accounting Scorecard - Measuring and Reporting Intellectual Capital
Indra Abeysekera, Dynamic Accounting, Sydney, Australia
Several indicators have been constructed to measure intellectual capital at the organisational level and at the item level. The majority of models constructed so far have not established the link between individual intellectual capital items and organisational intellectual capital performance. The few models that establish such a link demand significant management time to monitor, or have established indices outside the traditional accounting system. The Intellectual Accounting Scorecard integrates intellectual capital measurement and reporting into mainstream traditional accounting reporting. First, it identifies each intellectual capital item as intellectual revenue or intellectual expense having an impact on the statement of income, or as an intellectual asset or intellectual liability having an impact on the balance sheet. Second, it constructs ratios to monitor operational and strategic performance. Although there is ambiguity as to whether intellectual capital represents all intangibles, the more popular definitions indicate that it refers to intangibles not recognised in the financial statements. A 1997 study of top Canadian and US organisations reveals that non-financial measures will be the key to business success in the future. The organisations identified five broad categories of performance measures: customer service, market performance, innovation, goal achievement, and employee involvement. The most commonly used performance measures in firms were customer service and market performance. Firms tend to rely on non-financial measures that have been used for some time, and indicated that they rely less heavily on measures related to reputation, know-how, information systems, databases, and corporate culture, although these will play an increasingly important role in ascertaining the performance of a firm in the future (Stivers, Covin, Hall, & Smalt, 1997).
Sveiby (1997b) outlines three reasons why companies do not want to measure intangible assets: managers themselves do not understand the importance of doing so; indicators can give too much information away to competitors; and there is no rigorous theoretical model for this type of reporting. Since accounting systems are not designed to extract such information easily, such reporting can be time consuming and expensive. Even when intangibles are measured, research reveals that firms did not want to share human capital indicators externally since they feared losing talented employees to competitors (Miller, DuPont, Jeffrey, Mahon, Payer, & Starr, 1999). On the other hand, capitalising intangibles leads to increased subjectivity in cash flow analysis, difficulty in breaking intangibles into individual valuations, and the near-impossibility of determining when the recognition criteria of intangible assets are met so as to include them in the balance sheet (Backhuijs, Holterman, Oudman, Overgoor, & Zijlstra, 1999). The use of non-monetary indicators can help to avoid such problems to some extent. Measuring non-financial data is still more an art than a science, and in intellectual capital the choice of indicators can affect the results substantially (Roos, Roos, Dragonetti, & Edvinsson, 1997, p. 60). As in environmental reporting (Kirkman & Hope, 1992), there is no universally accepted model to measure intangibles. However, the various models proposed at least point in the right direction (Guthrie & Petty, 2000a). The measurement of intellectual capital is important since most senior executives in organizations manage what has been measured (Roos & Roos, 1997) and the organization becomes what it measures over time (Hauser & Katz, 1998). To evaluate and compare the existence of intellectual capital, researchers have used three broad indicators at the organizational level.
These indicators are derived from the audited financial statements of a firm and are independent of the firm’s definition of intellectual capital. There are three major indicators for measuring net intangible assets at the firm level (Stewart, 1997, pp. 224-229): market to net book value, Tobin’s q, and calculated intangible value (CIV). Apart from these, other methods include the direct intellectual capital method, Baruch Lev’s knowledge capital valuation, and Paul Strassmann’s knowledge capital valuation. Intellectual capital is the difference between the market value and the financial capital of an enterprise at a given date (Abdolmohammadi, Greenlay, & Poole, 2001; Dzinkowski, 2000; Knight, 1999; Roos, Roos, Dragonetti, & Edvinsson, 1997, p. 2; Sveiby, 1997a, pp. 3-18). Its reliability and usefulness can be improved by converting it to a ratio (Stewart, 1997, pp. 224-225), and it is the most widely known indicator (Knight, 1999). Traditional accounting measures net identifiable assets using a combination of costing methods, such as historical cost, present value, replacement cost, and market value. The market, on the other hand, values a firm’s net assets holistically, including assets and liabilities both identified and not identified by the traditional accounting system. Some authors use market to net book value as the basis for constructing indices. For example, the composite IC index is indirectly linked to the market value of the firm. When the index does not correlate with the market value, the choice of weights or indicators or the capital forms of the index are revised (Roos, Roos, Dragonetti, & Edvinsson, 1997, pp. 78-79, 92-93). If the ratio is more than 1, it indicates that the organization contains intellectual assets not represented in the financial statements.
Training as a percentage of payroll cost was significantly and positively associated with market-to-book value, indicating that Wall Street values firms investing in training more highly than others. However, if the ratio is less than 1.0, the firm may still have intellectual assets, but they can be masked by intellectual liabilities (Harvey & Lusch, 1999; Caddy, 2000). Tobin’s q was initially developed by the Nobel-prize-winning economist James Tobin to predict investment behaviour.
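The two organisational-level indicators discussed above are simple ratios. The sketch below shows how market to net book value and Tobin's q would be computed; the input figures are hypothetical, chosen only for illustration:

```python
def market_to_book(market_value, net_book_value):
    """Market value of the firm over its accounting net book value.
    A ratio above 1 suggests intellectual assets missing from the
    balance sheet; below 1, intellectual liabilities may mask them."""
    return market_value / net_book_value

def tobins_q(market_value, replacement_cost):
    """Tobin's q: market value over the replacement cost of assets."""
    return market_value / replacement_cost

# Hypothetical figures in $ millions.
print(market_to_book(12_000, 4_000))  # ratio well above 1
print(tobins_q(12_000, 9_000))
```

Both ratios use only publicly available figures, which is why the literature treats them as firm-level proxies that are independent of any particular definition of intellectual capital.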
Insights into Malaysian Consumers’ Perceptions of Products Made in the USA
Dr. Mohammad Sadiq Sohail, King Fahd University of Petroleum & Minerals, Dhahran, Saudi Arabia
Dr. Syed Aziz Anwar, University of Sharjah, United Arab Emirates
This paper examines the country-of-origin effects of products made in the United States. The study focuses on the sources of information used in evaluating products; the evaluation of specific product dimensions by Malaysian consumers; and consumers’ assessment of different product categories. Results based on an analysis of 240 responses indicate that the most common source of product information was product packaging. Products made in the United States were rated highly for their competitive prices. Electrical appliances were generally found to be the most highly rated product category by Malaysian consumers. Consumers in developing countries have a range of options when buying products. The impact of country-of-origin (in terms of the country of source or manufacture and the country of brand) on consumers’ perceptions of products has been widely examined and analyzed by marketing researchers for nearly four decades now (Schooler, 1965; Samiee, 1994; Peterson and Jolibert, 1995). Behavioral researchers in the area of marketing have made incessant efforts to gain deeper insights into the perceptual decisions made by consumers. First, it has been highlighted in the literature that country-of-origin may be used by consumers as an attribute to evaluate products (Johansson et al., 1985; Hong and Wyer, 1989). Second, consumers’ attention to and evaluation of other product dimensions may be influenced by country-of-origin, which may create a ‘halo effect’ (Erickson et al., 1984; Han, 1989). Third, country-of-origin may also act as a source of country stereotyping, thereby directly affecting consumers’ attitudes towards the brand of a country instead of attribute ratings (Wright, 1975). A noteworthy feature of the country-of-origin literature is that most studies have examined consumers’ perceptions of products from a wide range of countries.
While this may be helpful in undertaking a comparative analysis, it limits the detail available on any specific country. While a number of studies have been conducted on the country-of-origin effect on consumers in a wider context, no comprehensive study has been conducted relating to Malaysian consumers’ preference for and perception of goods made in the United States, which is an important trading partner of Malaysia. In the year 2001, Malaysia's total trade with the United States amounted to nearly US$28 billion, accounting for almost 20% of its global trade. Despite a decline in bilateral trade between the two countries since late 2001, the US has continued to be a major trading partner of Malaysia. In 2001, the value of imports from the United States amounted to nearly US$11 billion. America is the second largest source (after Japan) of merchandise imports to Malaysia, accounting for about 16% of total Malaysian imports in the year 2000. Malaysia’s consumer merchandise imports include canned vegetables and fruit products, electrical appliances, vehicles, tobacco manufactures and consumer durables. Against the backdrop of the significant volume of Malaysia’s imports of consumer goods from the United States, the aim of this paper is to examine and analyze the effects of country-of-origin on Malaysian consumers’ perceptions. More specifically, this study focuses on the following research questions: What are the sources of information used by Malaysian consumers in evaluating products originating from the United States, and how do these differ in terms of consumer demographics? How do consumers in Malaysia evaluate specific dimensions of products made in the United States, and how do these factors vary in relation to consumer demographics? What is the Malaysian consumers’ assessment of different product categories? The paper progresses as follows. In the following section, relevant literature on country-of-origin effects is reviewed.
Background information on Malaysia is then presented. The subsequent section describes the research method. Thereafter, the results of the study are discussed and analyzed in relation to each of the research questions. Finally, the paper highlights the implications of the findings for decision making and outlines some limitations of the study. While pioneering studies on the country-of-origin effect can be traced back to the 1960s, one of the neatest conceptualizations of this phenomenon was attempted by Nagashima (1970). He described the image consumers associate with a given country of origin as “the picture, the reputation, and the stereotype that businessmen and consumers attach to products of a specific country. This image is created by such variables as representative products, national characteristics, economic and political background, history, and traditions”. Since then, a formidable body of literature has been built up to study the country-of-origin effect. Samiee (1994) regards the country-of-origin effect as any influence or bias that consumers may hold, resulting from the country of origin of the associated product or service. The sources of the effect “may be varied, some based on experience with a product(s) from the country in question, others from personal experience (e.g. study and travel), knowledge regarding the country, political beliefs, ethnocentric tendencies, (or) fear of the unknown”. Studies have mainly focused on consumers’ general perceptions of the quality of products made in different countries (Leonidau et al., 1999; Bilkey and Nes, 1982; Peterson and Jolibert, 1995). Leonidau et al. (1999) found that in Bulgaria, products made in the United States were liked the most, followed by products from Hong Kong, Singapore, Indonesia, and India. Cattin et al. (1982) found that Americans favored West German products over French and British goods.
The Impact of the Gramm-Leach-Bliley Act - Disclosure of Nonpublic Personal Information on the Financial Institution and the Consumer
Mark S. Puclik, J.D., University of Illinois at Springfield, IL
Comparative Performance Measures and Information Dissemination Process of Several Euro-Asian Equity Markets
M. T. Vaziri, Ph.D., California State University, San Bernardino, CA
Some of the equity markets in Asia have been established for a long time. For example, widespread sovereign borrowing got under way in the late 18th century, when the spread of constitutional forms of government led to more stable nation states that recognized continuing liabilities to lenders; by 1866, some form of stock market was operating in countries such as Turkey. Net private flows into the emerging markets did not pick up again until the early 1990s. But the recovery thereafter was swift, with net inflows rising from $12 billion in 1988 to more than $100 billion by 1991. According to statistics from Boston-based Pioneering Management Corp., over the 50 years through 1995, emerging-market equities showed average annual returns of 16.5%, compared with 12.4% for the Standard & Poor's 500-stock index and 11.8% for the EAFE index. As a category of equity investment, emerging markets may be considered to have begun in 1986 under the sponsorship of the International Finance Corporation, an arm of the World Bank. For more than a decade stock markets have boomed in just about every country. From 1984 to 1994 the capitalization of world stock exchanges grew fivefold to a combined $18 trillion. Investments in emerging markets can produce spectacular returns, positive or negative. But picking potential winners, at the level of either country or company, is very difficult. It is clear that emerging markets carry considerable risks, including illiquidity, lack of transparency, and sharp swings in prices. In general, S&P classifies a stock market as "emerging" if it meets at least one of two general criteria: it is located in a low- or middle-income economy as defined by the World Bank, or its market capitalization is low relative to its most recent GDP figures. Until 1995, S&P's definition of an emerging stock market was based entirely on the World Bank's classification of low- and middle-income economies.
If a country's GNP per capita did not reach the World Bank's threshold for a high-income country, the stock market in that country was said to be "emerging." More recently, this definition has proved less than satisfactory due to wide fluctuations in dollar-based GNP per capita figures. Dollar-based GNP figures have been significantly impacted by severe swings in exchange rates, especially in Asia. Moreover, reported GNP figures, which take significant time to prepare, are often out-of-date by the time they are released. Accordingly, S&P has adopted new criteria for a market to graduate from index coverage: GNP per capita for an economy should exceed the World Bank's upper income threshold for at least three consecutive years. The three-year minimum limits the possibility that the GNP per capita level is biased by an overvalued currency. Some markets are still in their infancy, and yet others have still to debut. Furthermore, there are markets that are being reborn, such as Egypt's, which existed in principle for a century but only recently began operating again as a real marketplace for capital. Yet other markets have been around for decades and have formed the core of today's emerging market portfolios. There are still several long-established markets where trading takes place over a cup of tea, whereas many other markets have implemented the latest technology to expedite trading, settlement, portfolio management, market supervision, and the information dissemination process. The recent uncertainty of the American and European stock markets raises the question of where to invest to get the highest return with the lowest risk. Investing in emerging stock markets, or foreign countries in general, also means facing a currency risk, which should never be underestimated.
This study provides exchange rate adjusted returns for the weighted Composite Index of seven emerging Euro-Asian markets (Bulgaria, Jordan, Egypt, Turkey, Pakistan, Iran and Croatia) and compares them with the S&P 500 Index, several portfolios, and one another. These results indicate growth in opportunities to raise capital through the stock markets in developing economies and an enhanced ability to diversify risk. Emerging stock markets (ESMs) now exist in about 70 developing countries. These markets account for about 12% of global stock market capitalization and value traded and over 50% of the number of listed companies. Performance indicators show growing competition between the emerging and developed markets: for example, the size of the market, measured by the ratio of market capitalization to GDP, shows that some emerging markets are larger than developed markets (Bonser-Neal and Dewenter, 1999). Similarly, market liquidity, measured by the ratio of value traded to market capitalization, indicates that some emerging markets are more liquid than developed markets, an indication of momentum gained in the development process. Emerging markets also show increased stock prices, which could be attributed to increased demand (Kim and Singal, 2000). The revitalisation process is characterised by a growing shift from periodic to continuous trading systems, automation of various services, reduced restrictions on foreign investment, improved settlement and clearance procedures, and strengthening of the legal and regulatory systems. The process is aimed at strengthening the institutional set-up, thus creating investor confidence and enhancing competitive trading. Indeed, Röell (1992) and Khambata (2000) note that tight disclosure requirements and competitive auditing and accounting standards create confidence among investors to commit their resources to the emerging stock markets.
Demirgüç-Kunt and Levine (1996) also observe that economies with strong information disclosure laws, internationally accepted accounting standards and unrestricted international capital flows tend to have larger and more liquid markets. In comparative terms, while the developed markets with well-established institutions are characterised by high levels of liquidity and trading activity, substantial market depth and low information asymmetry, the emerging markets are said to exhibit more information asymmetry, thin trading and shallow depth because of their weak institutional infrastructure. Kumar and Tsetsekos (1999) demonstrate the implied differences between the institutional set-ups of emerging and developed stock markets using a 1980-1992 data set.
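The exchange rate adjustment this study applies to index returns can be illustrated with the standard conversion of a local-currency return into a dollar return; the study's exact methodology is not reproduced here, and the figures below are hypothetical:

```python
def dollar_adjusted_return(local_return, fx_return):
    """Convert a local-currency index return into a USD return.

    fx_return is the fractional change in the local currency's USD value
    over the same period (negative when the currency depreciates).
    (1 + r_usd) = (1 + r_local) * (1 + r_fx)
    """
    return (1 + local_return) * (1 + fx_return) - 1

# Hypothetical: a 25% local-currency gain eroded by a 15% depreciation
# leaves a much smaller dollar return.
print(dollar_adjusted_return(0.25, -0.15))
```

The multiplicative form makes the currency risk explicit: a strong local-index gain can be largely, or entirely, offset by depreciation, which is why unadjusted emerging-market returns overstate what a dollar-based investor actually earns.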
Copyright 2000-2016. All Rights Reserved