The Journal of American Academy of Business, Cambridge

Vol. 11 * Num. 2 * September 2007

The Library of Congress, Washington, DC * ISSN: 1540-7780

Most Trusted.  Most Cited.  Most Read.


All submissions are subject to a double blind peer review process.

The primary goal of the journal is to provide opportunities for business-related academicians and professionals from various business-related fields around the globe to publish their papers in one source. The Journal of American Academy of Business, Cambridge brings together academicians and professionals from all business-related fields to interact with members inside and outside their own particular disciplines. The journal provides opportunities for researchers to publish their papers as well as to view the work of others. All submissions are subject to a double-blind peer review process. The Journal of American Academy of Business, Cambridge is a refereed academic journal that publishes scientific research findings in its field under ISSN 1540-7780, issued by the Library of Congress, Washington, DC. The journal will meet the quality and integrity requirements of applicable accreditation agencies (AACSB, regional) and journal evaluation organizations to ensure that our publications provide our authors with publication venues that are recognized by their institutions for academic advancement and academically qualified status. No manuscript will be accepted without the required format. All manuscripts should be professionally proofread before submission. Authors may use www.editavenue.com for professional proofreading and editing.

The Journal of American Academy of Business, Cambridge is published twice a year, in March and September. E-mail: jaabc1@aol.com; Journal: JAABC. Requests for subscriptions, back issues, and changes of address, as well as advertising inquiries, can be made via the e-mail address above. Manuscripts and other materials of an editorial nature should be directed to the Journal's e-mail address above. Address advertising inquiries to the Advertising Manager.

Copyright 2000-2017. All Rights Reserved

Leveraging Generational Differences for Productivity Gains

Nancy Patota, Iona College, New Rochelle, NY

Dr. Deborah Schwartz, e-TECHknowledge, Armonk, NY

Dr. Theodore Schwartz, Iona College, New Rochelle, NY

 

ABSTRACT

Much has been written to describe generational differences in today's workforce, including the problems of managing a team of multi-generational workers. These differences are often a source of conflict among employees, but they can also be a source of strength. Often, the strengths and weaknesses of one generation complement those of another. Prescriptive approaches to dealing with this reality are lacking. Our prescriptive approach involves identifying necessary competencies and the related strengths and weaknesses of each generation by competency. This tool allows managers and employees to build multigenerational teams whose members complement each other's skills, values, and beliefs to meet team/project objectives. We provide examples of typical business situations and show how this tool can be used to leverage generational differences. Today's workforce is unique because there are four separate, distinct generations working side by side, frequently each with a different approach to their company, their co-workers, and the work itself. This is not the generation gap of the past, where a generation grows up and becomes its parents. Instead, it is a convergence of four generations, where each one may be substantially different from the others and each is often on an entirely different path in work and in life. A good example of this phenomenon is shown in the novel Generation X, which depicts three alienated Gen Xers dropping out of the rat race in rebellion against Baby Boomer values [Coupland, 1991]. The four generations in today's workforce include: Traditionalists, whose life experiences were formed by the Great Depression and World War II; Baby Boomers, influenced by economic prosperity and the Vietnam War; Generation X, whose attitudes were formed by the instability of society's institutions such as marriage (and divorce) and corporations (and downsizing); and Millennials, whose key life experiences revolve around technology and a world where everyone is always "connected" through the Internet, cell phones, iPods, etc. For managers, the challenge is improving the productivity of individuals and teams in a multi-generational workforce. Each generation has different life views and responds to different motivations. For employees, the challenge is to work effectively with members of other generations. With an appropriate mindset, the potential areas of conflict could be viewed as the source of a rich and rewarding work environment. Both perspectives involve the realization that generational differences are not necessarily good or bad; they simply exist. Misunderstandings and strife often result between members of different generational groups because of these differences. Especially in times of reorganization and downsizing, members of different groups tend to view each other with suspicion and antipathy as they compete for fewer and fewer jobs. As a result, the strengths of each generation tend to be used suboptimally, or ignored instead of being leveraged to achieve better business results. In this article, we present an overview of the four different generations found in today's workforce. This overview is a summary of work done by many authors [Jackson, 1992; Lancaster and Stillman, 2002; Zemke, Raines and Filipczak, 2000]. However, we take the work of these generational experts a step further and identify the typical workplace strengths and weaknesses of each generational group. The article then describes our approach to dealing with these generational differences.
The primary management tool we propose is the Generations/Competencies Matrix. The Matrix, which can be developed for each organization, shows the strengths and weaknesses of each generation by competency. The cells of the matrix provide an easy reference point for addressing areas of conflict among generations, for leveraging teams, and for training. Examples of how to use the Generations/Competencies Matrix in some typical situations are provided. In this way, both managers and employees have an easy-to-use tool for successfully working with generational differences. A generation is a group of people who share common experiences and a common collective memory based on key events that occurred during their lifetime. The collective memories of a generation lead to a set of common beliefs, values, and expectations that are unique to that generation. A generation does not grow up to become like its parents, but instead continues through its lifetime with a separate and distinct set of beliefs and expectations formed by those shared experiences. Today's workforce is commonly thought of as consisting of four distinct generations, as shown below in Table 1. While the end dates of each generation vary slightly among researchers, the shared life experiences are defined similarly in the literature. Like most models of human behavior, the generational model provides a way of thinking about people but does not imply mutually exclusive categories. Not all people behave in the way that their common generational experiences would lead us to expect. For example, not all Millennials speak in technical jargon and not all Traditionalists are clear communicators. Some of the exceptions to the model are explained by the crossover effect and by the end dates of the generations. The crossover effect occurs when some event is so important that it affects more than one generation, so multiple generations share that common experience. For example, the tragedy of 9/11 impacted all four current workplace generations and is part of the shared memory of all of us. Another source of discrepancy is the difficulty of setting the exact dates for the ending of one generation and the beginning of a new one, as shown by the different dates used by various researchers when describing the four distinct generations. Many researchers have conducted controlled studies that confirm the different shared experiences of each of the four generations. For example, Schuman and Scott collected data from over 1,400 participants to confirm that different generations remember different defining events [Schuman and Scott, 1989]. Daboval compared Gen Xers and Baby Boomers and found distinct differences in their loyalty and commitment to an organization and in the most effective motivators in terms of compensation and training. More recently, Arsenault confirmed the differences in collective memories and preferred leadership styles in a sample of 790 participants [Arsenault, 2004].
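
To make the idea of the Generations/Competencies Matrix concrete, the following is a minimal sketch of how such a matrix could be represented and queried. The generation labels follow the paper; the competencies and the strength/weakness entries are hypothetical placeholders, not the authors' actual matrix.

```python
# Minimal sketch of a Generations/Competencies Matrix as a lookup table.
# Competencies and strength/weakness entries below are hypothetical examples.
MATRIX = {
    ("Traditionalists", "Technology use"): "weakness",
    ("Traditionalists", "Face-to-face communication"): "strength",
    ("Baby Boomers", "Organizational loyalty"): "strength",
    ("Generation X", "Independent work"): "strength",
    ("Millennials", "Technology use"): "strength",
    ("Millennials", "Face-to-face communication"): "weakness",
}

def complementary_pairs(competency):
    """Return (strong, weak) generation pairs for one competency,
    e.g. candidates for mentoring or mixed-generation teaming."""
    strong = [g for (g, c), v in MATRIX.items() if c == competency and v == "strength"]
    weak = [g for (g, c), v in MATRIX.items() if c == competency and v == "weakness"]
    return [(s, w) for s in strong for w in weak]

print(complementary_pairs("Technology use"))
# [('Millennials', 'Traditionalists')] -- a pairing a manager might leverage
```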

 

Contextual Barriers to Strategic Implementation: An Examination of Frontline Perspectives

Dr. Karen A. Meers, Western Connecticut State University, Danbury, CT

 

ABSTRACT

Frontline employees are a valuable asset to many firms, but they have been overlooked as a resource for strategic implementation. This study measured the effectiveness of a newly implemented account management telemarketing program and asked frontline associates to identify the barriers to successful implementation of the initiative. The sample consisted of 68 customer service representatives from a Fortune 500 company in the United States who had telemarketing calls added to their existing job tasks as a strategy to improve customer satisfaction ratings. Effectiveness ratio data were collected twice over an eight-week period to determine changes in telemarketing performance. Interview data were collected to determine the participants' perspectives on barriers to the successful implementation of the program. The participants revealed that job-reality mismatches were responsible for the four contextual barriers that hindered implementation success: 1) telemarketing was viewed as a low priority by the customer service associates, in direct opposition to the perceptions of the management team; 2) stress and overload resulted from interfering job tasks and time pressures; 3) managers and associates lacked telemarketing experience; and 4) telemarketing was resisted. Practical implications for utilizing frontline feedback in the strategic process to proactively address job-reality mismatches and implementation challenges are discussed. Over the years, strategic implementation has been an area of study that has generated debate and concern. A cross-industry survey published in 2004 found that only 43 percent of executives rated their companies as having been "successful" or "very successful" at executing strategy initiatives (Economist Intelligence Unit, 2004: 254). The results also revealed that one in three executives rated their performance management systems and processes as effective. According to Roney (2004), there are no authoritative references or sources of generally accepted principles to guide management in strategy implementation. The flaw that has made strategic management unsuccessful on so many occasions is not in the existing theories of strategy, but in the methodology for implementation, or rather the lack of such methodology. In a literature review of 227 strategy-process articles published in research journals, Hutzschenreuter and Kleindienst (2006) found that strategic implementation received only limited attention. They noted that the studies in this area focused predominantly on planning (Gottschalk, 1999; Grundy & King, 1992) or middle-management involvement (Floyd & Wooldridge, 1992). Studies on successful implementation highlighted the behavior of key individuals involved in the strategy process (Floyd & Wooldridge, 1992), strategic decision commitment (Dooley et al., 2000), and learning (Miller, Wilson, & Hickson, 2004). A gap in the existing theory and research is the perspective of the frontline employee on the success and failure of strategic initiatives. The existing literature primarily maps out steps for managers to follow, but only hints at utilizing the feedback of frontline associates to evaluate the effectiveness of the changes that directly affect them and their customers. Many theorists and researchers briefly address the importance of frontline associates in various stages of strategic management, but fall short of suggesting that they play a key role in diagnosing and preventing implementation failures.
Barthelemy (2006) mentions the need to field suggestions from associates for strategy formulation. Roney (2004) notes that one prerequisite for positive implementation results is well-motivated personnel, including individuals and groups at all levels. Worley and Lawler III (2006) theorize that in order for firms to manage change effectively, they need to obtain input continually from employees to develop and evolve their business strategies. Allio (2006) proposes utilizing a strategic process implementation checklist which includes cross-functional input as well as an item for diagnosing implementation failures and celebrating successes. Recent articles suggest that frontline associates have an impact on many aspects of organizations; however, they fall short of defining a clear role for frontline assessments of implementation effectiveness. Maruca (2006) postulates that top managers' attention to the front-line experience can help create a mechanism that affects customers, employees, and the bottom line, improving quality, performance, and profit. Building a high-performance culture among frontline staff, however, may be the most challenging element of outstanding front-line performance (Rogers & Davis-Peccoud, 2006). Research has established that organizations with higher employee engagement experience higher customer satisfaction, profits, and productivity, and lower employee turnover and accidents (Hart, Schmidt, & Hayes, 2006). In other words, frontline associates are major players in many facets of organizational success; therefore, their knowledge may be a useful feedback tool for assessing implementation failures and realigning goals. This study addressed frontline perceptions of a strategic implementation that directly affected their job responsibilities. More specifically, this study sought to explore, "What barriers to effective implementation did customer service representatives identify?" Answers to this question may provide useful insight for practitioners to utilize in anticipating, diagnosing, and correcting possible obstacles to successful implementation. This study measured the effectiveness of a newly implemented account management program which added telemarketing tasks to the existing responsibilities of customer service representatives. As a follow-up to the measurements, the customer service representatives were interviewed to determine their perceptions of barriers to successful implementation of the initiative. This study was conducted in a Fortune 500 company with offices located in the northeast, south, west, and central United States. The sample consisted of 68 customer service representatives who had account management telemarketing calls added to their existing job tasks. Telemarketing was viewed by upper management as one of the most important aspects of the job because it enhanced the relationship between sales and service, demonstrated innovation, and was perceived as a key component in the improvement of customer satisfaction scores awarded by customers. To increase the probability of implementation success, upper management expressed their commitment to this program verbally and in written form. The associates were told how their new responsibilities linked to the strategic mission of the company and department. All of the participants and their managers attended a telemarketing training course and were given personal and departmental goals to reach. Quantitative data were collected twice.
The effectiveness ratio was chosen to measure the success of the telemarketing program because it captures changes in telemarketing performance. The effectiveness ratio was calculated as the number of telephone calls to customers that resulted in a sale divided by the total number of calls to customers over a one-week period.
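
As a quick illustration of the ratio defined above, the following sketch computes it for two hypothetical measurement weeks; the call counts are invented and are not the study's data.

```python
# Worked example of the effectiveness ratio described in the text:
# calls that resulted in a sale divided by total calls in a one-week period.
# The call counts below are hypothetical.

def effectiveness_ratio(calls_with_sale, total_calls):
    return calls_with_sale / total_calls

week_1 = effectiveness_ratio(calls_with_sale=12, total_calls=150)  # 0.08
week_8 = effectiveness_ratio(calls_with_sale=21, total_calls=140)  # 0.15
print(f"Change in effectiveness: {week_8 - week_1:+.2f}")          # +0.07
```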

 

The Impact of Inventory Reductions upon Cash Balances

Dr. Richard Skolnik, State University of New York-Oswego, Oswego, NY

 

ABSTRACT

Innovations in operations and distribution have resulted in a reduction in the inventory to sales ratio. Contemporaneously, the financial asset to sales ratio has increased. Existing research has shown that management discretion results in higher than optimal cash balances. This study investigates whether a reduction in inventory requirements results in greater management discretion and an increase in cash balances. A regression model using quarterly data from the Federal Reserve Flow of Funds balance sheet accounts finds a negative relationship between inventory and financial asset balances. The results suggest that reductions in inventory requirements lead to slightly higher cash balances. The model also finds that the financial asset to sales ratio is negatively related to economic growth: increases in economic growth correspond to lower levels of financial assets relative to sales. Technological advances and investment in operations management have led to decreases in the inventory levels necessary to support sales (Irvine, 2003). Since the beginning of the 1980s, aggregate and industry-specific inventory to sales ratios have declined; however, data from the Federal Reserve Flow of Funds and Outstanding Balances (Federal Reserve Board of Governors, 2005) indicate that decreasing inventory requirements have been accompanied by increasing cash balances. Does a decrease in inventory requirements lead to higher cash balances? Research by Opler et al. (1999) finds that managers maintain higher cash balances when they have increased discretion to do so. This paper uses a regression model to test whether the increase in the financial asset to sales ratio is linked to the decrease in the inventory to sales ratio. The paper is organized in the following manner: Section I surveys the literature on inventory and cash levels; Section II describes the data and trends; and Section III develops a model and reports on statistical tests. The economic impact of inventories has been studied on both microeconomic and macroeconomic levels (Blinder and Maccini, 1991). Microeconomic studies focus on optimal inventory levels for profit maximization. Firms hold inventories for a variety of purposes, which include work in progress, production scheduling optimization, stockout cost minimization, price speculation, price hedging, and delivery cost reduction. More efficient use of inventories can increase financial performance through impacts on both the income statement and the balance sheet. On the profitability side, increased inventory efficiency can lead to lower storage costs per unit, increased sales, and lower scrap and obsolescence (Inman and Mehra, 1993). The resulting increase in revenue and decrease in costs results in higher profits. Inventory efficiency affects the balance sheet through a reduction in asset requirements: lower inventory leads to lower current assets, which leads to lower total assets. Return measures such as Return on Assets (ROA) and Return on Equity (ROE) are enhanced because the numerator, net income, increases due to lower costs and higher sales, and the denominator, assets or equity, is reduced through a reduction in inventory. However, return enhancement may be dampened by the potential costs of inventory management technology and the inability of firms to redeploy assets. For example, profitability does not increase if the cost of implementing an inventory management system outweighs its cost savings.
Likewise, a reduction in inventory does not result in a decrease in assets per unit sales if the assets are not productively redeployed.  The Inventory to Sales (IS) ratio is a common measure of the efficiency in which firms use inventories. Increases in operational technology lead to lower  inventory levels necessary to support a given level of sales, resulting in a lower IS ratio.  Although some  studies have noted the surprisingly small decrease in aggregate inventory levels given the innovations in operations and distribution (Hirsch 1996, Blinder and Maccini 1991), Irvine (2003) shows that both the aggregate IS ratio and the industry-specific IS ratios declined during the 1990s. Industry-specific IS ratios registered larger declines than the aggregate IS ratio because of the increasing importance of industries with relatively high IS ratios. In particular, Irvine (2003) found that since 1990 durable goods, relative to non-durables, constitute a larger portion of manufacturing, wholesale trade, and retail trade.  Since the inventory to sales ratio is higher for durables than for non-durables, growth in durables mitigated the reduction in the overall IS ratio, even though IS ratios in all sectors declined.  Several studies have explored the link between reduced inventory requirements and financial performance.  Balakrishnan et al. (1996) show that firms implementing just-in-time (JIT) production systems may not necessarily increase ROA because reduced inventory requirements do not necessarily translate into reduced asset requirements.  Assets freed up by lower inventory requirements are absorbed into other accounts. Balakrishnan et al.  find that the degree to which firms capitalize on the inventory reducing effects of JIT depends upon customer concentration and cost structure of the industry.   A subsequent study by Biggert and Gargeya (2002) finds that large firms adopting JIT reduce their inventory requirements primarily through reductions in raw material requirements, not through work-in-progress or finished goods inventory. Biggert and Gargeya speculate that firms are able to reduce raw material inventories by pressuring suppliers, but that firms are less successful in reducing inventories when changes need to be implemented by their own organizations or customers. This study builds upon the results of Balakrishnan et al. (1996) and Biggert and Gargeya  (2002) by testing the macroeconomic relationship between inventory reductions and increased cash balances. If reduced inventory results in larger cash balances, the financial benefit from increased inventory efficiency is reduced.   Figure 1 provides a numeric example to illustrate this concept. Before a new inventory management system is implemented, a firm has inventory of $600 and a cash balance of $200.  Improvements in distribution decrease the inventory requirement to $400 but the reduction in inventory results in an increase in cash balances, leaving total assets unchanged. If the new system did not affect costs or revenue, ROA would remain the same since a reduction in one current asset was offset by an increase in another.  The following section reviews research on the impact that management discretion has on cash balances.  An extensive literature addresses corporate cash holdings.  A classic reference is Keynes (1934). He identifies the transaction and precautionary benefits of holding cash.  
Firms reduce their transaction cost of raising funds by holding cash; assets do not need to be liquidated in order to make payments. The precautionary benefit of cash is due to the buffer that cash holdings provide the firm during adverse conditions. A firm with cash holdings will be able to finance activities if other sources of funding are unavailable or exceedingly expensive. Firms with low cash balances may be unable to raise capital when faced with adverse conditions. 
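
The arithmetic behind the Figure 1 argument can be made explicit with a short numeric sketch. The inventory and cash figures follow the example in the text ($600 and $200 before, $400 and $400 after); the net income and "other assets" figures are hypothetical additions needed to compute an ROA.

```python
# Numeric illustration of the Figure 1 argument: a $200 inventory reduction
# that simply migrates into cash leaves total assets, and therefore ROA,
# unchanged. Net income and other_assets are hypothetical assumptions.

net_income = 100
other_assets = 1200  # hypothetical non-inventory, non-cash assets

before = {"inventory": 600, "cash": 200}
after = {"inventory": 400, "cash": 400}   # freed-up $200 absorbed by cash

for label, balances in [("before", before), ("after", after)]:
    total_assets = balances["inventory"] + balances["cash"] + other_assets
    print(label, "ROA =", round(net_income / total_assets, 4))
# before ROA = 0.05
# after  ROA = 0.05  -> no gain unless the $200 is productively redeployed
```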

 

Teaching Customer Value Analysis to Business School Students

Dr. Gene Milbourn, Jr., University of Baltimore, MD

 

ABSTRACT

This paper provides an outline for structuring a consulting project for business school students on the topic of Customer Value Analysis (CVA). It suggests a step-by-step program providing students with a methodology whereby they can manage "perceived" quality rather than relying on traditional conformance criteria. Specifically, the paper assists students in identifying what quality means to customers, which competitors are performing best on each aspect of quality, and how customers select among competing suppliers. The seminal work of Buzzell and Gale (1987) and Gale (1994) on linking customer perceptions of quality to market performance is featured. During the late 1980s and into the 1990s, businesses around the world added capacity in response to increased demand. This movement inevitably led to increased competition and to improved quality in both products and services through improved organizational processes. Programs such as reengineering, TQM, just-in-time inventory control, quality circles, and work process engineering were popular and had a positive impact on organizational functioning. Tom Peters (1982; 1985) contributed for more than a decade to improving quality in the service area through his many books, including In Search of Excellence and A Passion for Excellence. Scholars and practitioners are indebted to Deming (1986) for his 14-point program for transforming organizations and to Garvin (1988) for his work identifying the eight dimensions of quality. Parasuraman, Zeithaml, and Berry (1986) have identified the factors which are critical to outstanding service quality. Exhibits 1, 2, and 3 in the Appendix show the factors which were important to these researchers. However stellar the above contributions were to the quality movement, none address the main concern of CVA: improving market position relative to that of key competitors by providing increased customer value. Often, quality management programs do not take into account the quality of service provided, the likelihood of flawed original specifications, and, most importantly, the nature of buyers who instinctively compare products and services. All of these negate the efficacy of relying solely on static conformance standards to manage quality programs and argue for a more dynamic approach. While the customer value concept is central to the marketing concept, customer value research is in its initial stages, with early work provided by Band (1991), Ulaga (2003), Woodruff (1997), and Bowman and Ambrosini (1998). In general, the thrust has been toward a more market-oriented organization which can become profitable by providing value to the customer through its core processes. Naumann (1995) and others (e.g., Patterson and Spreng, 1997) advance the argument that in the future a company will need to develop a competitive advantage based on providing a "customer value triad" consisting of product quality, service quality, and value-based prices. Customer value analysis (CVA) involves a structural analysis of the antecedent factors of perceived value (usually perceived quality and price) to assess their relative importance in the perceptions of buyers. The process will (1) show the quality attributes that matter to customers of a business and of its rivals; (2) show exactly how customers define these quality attributes; (3) identify which competitors have superior values on each quality attribute; and (4) help provide a fact-based foundation on which to design improvement strategies.
A positive customer value position relative to rival firms predicts a greater change in market share, a greater return on sales, and a greater return on investment (Gale, 1994). Gale's book, Managing Customer Value, was reviewed as ". . . arguably the most useful marketing study since the formative works of Peter Drucker, Philip Kotler, and Michael Porter" (Publishers Weekly, 1994). Teams of students are asked to create focus groups of four to six customers to identify what quality really is in their marketplace. This procedure identifies why customers patronize the subject business as well as rival businesses. The customers selected for focus group participation must be experienced buyers; that is, they should have experience buying from the subject business as well as from the rival businesses. Table 1 shows a template to be used to collect raw data from customers in the focus groups. Quality attributes generated by the focus group participants are listed in column 1. Column 2 contains the importance of each quality attribute to the focus group; the total must be 100%. Columns 3, 4, and 5 contain the focus group's ratings of the subject business and two rivals on each quality attribute based on a 1-10 scale. The averages of the rivals' ratings are listed in column 6. Column 7 contains the ratios for the subject business, each calculated by dividing its rating by the average of the rivals'. The market-perceived quality ratio is calculated by summing the weighted ratios in column 8. Basically, focus group members are asked what non-price attributes are important in their buying decisions. The participants may mention such factors as warranty coverage, repair and maintenance records, sales service, customization, technical support, location, complaint handling, ordering and billing simplicity, delivery practices, customer service, and other aspects of quality service. Costs are not included at this stage. In a university setting, non-price quality attributes that affect buying (enrollment) decisions typically include placement reputation, brand image, alumni network, specializations, teaching, available technology, and the availability of night classes. This list is merely suggestive. Once the focus group agrees on a list of attributes, the participants distribute 100 points among the attributes indicating their relative importance. The customers are then asked to rate, on a 1 to 10 scale, the performance of the subject business and each rival on each factor. Rivals should include the most important direct competitors, such as the fastest growing or the most innovative. Multiply each business's score on each factor by the weight of that factor and add the results to get a "Market-Perceived Quality Ratio." This ratio summarizes, in percentage terms, how much better or worse a business's product or service is in the minds of customers compared to rivals. It details the quality attributes or business processes where one business is superior or inferior to important rivals, thereby providing a good starting point for change strategies.
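
A worked sketch of the worksheet calculation may help students see the arithmetic. The attributes, importance weights, and 1-10 ratings below are hypothetical (using the university-setting attributes mentioned above); the logic follows the Table 1 description, weighting each attribute's subject-to-rival ratio by its importance and summing.

```python
# Sketch of the focus-group worksheet: each attribute ratio is the subject's
# rating divided by the rivals' average rating, weighted by importance.
# All weights and ratings below are hypothetical examples.

attributes = {
    # attribute: (weight %, subject rating, rival A rating, rival B rating)
    "placement reputation":     (30, 8, 7, 6),
    "brand image":              (25, 6, 8, 7),
    "alumni network":           (20, 7, 7, 6),
    "night class availability": (25, 9, 5, 6),
}

mpq_ratio = 0.0
for name, (weight, subject, rival_a, rival_b) in attributes.items():
    rival_avg = (rival_a + rival_b) / 2
    mpq_ratio += (weight / 100) * (subject / rival_avg)

print(f"Market-perceived quality ratio: {mpq_ratio:.2f}")
# A value above 1.0 means customers see the subject business as better,
# on balance, than its rivals on the non-price attributes they care about.
```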

 

Standby Letters of Credit and Loan Sales: Joint Products?

Dr. Vassilios N. Gargalas, Herbert H. Lehman College, Bronx, NY

 

ABSTRACT

In their traditional function, commercial banks make loan sales with recourse to free up capital while they evaluate the credit-worthiness of their clients and also take a position of risk. However, government regulations prevent banks from taking full advantage of this activity. Loan sales with recourse are treated as on-balance sheet items, which requires additional reserves with the Fed, higher FDIC premiums, as well as increased capital requirements. At the same time, SLCs (standby letters of credit) and loan sales without recourse are off-balance sheet items not subject to the above regulations. Using the time-state preference model, this paper shows that the cash flow structures of loan sales with recourse can be replicated by portfolios consisting of SLCs and loan sales without recourse. In this fashion, SLCs and loan sales without recourse become joint or complementary activities. Since these portfolios are in principle less expensive, one would expect banks to substitute these portfolios for loan sales with recourse. Bank managers who are aware of the above relationship can make more efficient use of bank resources. In the last two and a half decades, loan sales have experienced an unprecedented increase. In 2004, the volume of loan sales reached 260.4 billion dollars and the volume of SLCs 210.6 billion dollars. This increase raised the question among financial economists of whether traditional banking was being transformed into something new. Indeed, loan sales with recourse, as explained below, can easily be seen as instruments of traditional banking, since banks not only perform the credit analysis but also undertake the risk of lending. However, if one takes a closer look at the data, one will realize that the vast majority of loan sales are made without recourse. This, in turn, makes banks simple loan brokers and, as such, they would deviate from their traditional role. In this paper, we deal with the above issue by examining two of the most prevalent off-balance sheet instruments, standby letters of credit and loan sales, and we argue that banks remain the traditional houses we've come to know. The explanation of the above phenomenon lies in the parallel increase in standby letter of credit volume. The model we develop demonstrates that banks create synthetic loan sales with recourse by combining portfolios of loan sales without recourse with portfolios of standby letters of credit. In doing so, banks avoid the "regulatory taxes" explained below. In this fashion, banks, in an indirect way, still deal in loan sales with recourse and, therefore, their activities remain within their traditional scope. The implication for banks is that once this relationship is clear, bank managers can actually pursue and implement the strategy rather than letting it occur passively. In the process, banks will become more efficient and better adapted to today's challenging financial environment. Loan sales may be made without recourse or with recourse. Loans sold without recourse are originated by the bank and then, like any other loan, are entered on the bank's books. At the second stage, a buyer is found, the loan is sold, and it is subsequently removed from the bank's books. From this point on, the bank maintains no responsibility for the loan. Should the loan default, the bank is under no obligation to indemnify the buyer. In most cases, the bank will continue to service and monitor the loan for a fee.
In the case of a loan sold with recourse, the loan is originated by the bank, entered on the bank's books, and subsequently sold to a third party. All along, the bank maintains a contractual obligation to buy the loan back if it defaults or declines in quality. The bank will usually service and monitor the loan after it is sold. Unlike loans sold without recourse, loans sold with recourse are not removed from the bank's books. For the purposes of Regulation D, in the sale of a loan subject to an unconditional agreement to repurchase, the bank is considered to be the borrower of the proceeds from the loan sale. Goldberg and Lloyd-Davies (1985) conclude that standby letters of credit have no impact on overall bank riskiness. Pavel (1988) investigates loan sales without recourse, the other off-balance sheet instrument that will concern us, and concludes that they have no impact on overall bank riskiness. Benveniste and Berger (1987) offer a comparison between securitized assets that pay off the securitized lender first and multi-class securities with sequential claims issued against the same collateral pool (a practice not permitted to commercial banks). The definition of securitization in their article is very broad and encompasses loan sales, standby letters of credit, etc. According to their model, the payoffs, as well as the risk sharing achieved by securitization, are similar to those achieved by sequential claims. Pennacchi (1987), in his article on the profitability of loan sales based on a theoretical model, concludes that it is profitable for banks to sell loans because banks have credit analysis ability that is superior to the public's, as well as a higher cost of capital than other, non-regulated institutions. Pavel (1988) suggests that there are three purposes for loan sales. The first one is funding; that is, some banks may not want to keep a loan on their books. For example, the loan may be less risky than the bank itself, but the bank may still want to originate the loan in order to maintain good relations with the client. The empirical results in that paper indicate that there is a statistically significant difference between the ratio of loan sales to assets for the thirty riskiest banks when compared to the analogous ratio for the thirty least risky banks for 1985. The difference in the change in risk of the two groups between two consecutive years is not statistically significant, indicating that funding is a reason for selling loans. The strategy of using loan sales as a funding device seems to have little impact on bank risk. Loan sales have also been identified as a means to alter the diversification of the bank's loan portfolio. Pavel concludes that the banks that were least diversified in 1984 sold more than twice as many loans (as a percentage of assets) in 1985 as bank holding companies that were the most diversified in 1984. The difference was statistically significant. As before, however, the change in the riskiness of the two groups was not statistically significant. Capital constraints may be another reason for loan sales. Firms that increased their primary capital ratio over the 1984-85 period were compared to bank holding companies that decreased their capital ratios. The difference between the two groups in loans sold was not statistically significant. Loan sales do not seem to be used by banks in order to increase their primary capital ratios.
Even if one is willing to assume that loan sales are used by banks in order to increase their primary capital ratios, the empirical tests show that such banks do not alter their riskiness any more than bank holding companies that increase their primary capital ratios through some other means. There are three types of regulatory taxes: reserve requirements, capital requirements, and FDIC premiums. According to Regulation D, when a loan is sold with recourse, the proceeds of the sale remain on the bank's books and are treated as deposits.
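
A simple two-state illustration, in the spirit of the paper's time-state preference argument, shows how a loan sale without recourse plus an SLC can replicate the buyer's payoff from a loan sale with recourse. The dollar amounts are hypothetical; this is a sketch of the replication idea, not the paper's formal model.

```python
# Two-state illustration: from the loan buyer's perspective, a loan sold
# WITHOUT recourse plus a standby letter of credit covering the shortfall
# pays the same as a loan sold WITH recourse. All amounts are hypothetical.

face_value = 100   # promised repayment to the loan buyer
recovery = 40      # what the loan pays in the default state

def loan_with_recourse(state):
    # the bank buys the loan back at face value if it defaults
    return face_value

def loan_without_recourse(state):
    return face_value if state == "no_default" else recovery

def standby_letter_of_credit(state):
    # the bank pays the shortfall between the promised amount and the recovery
    return 0 if state == "no_default" else face_value - recovery

for state in ["no_default", "default"]:
    synthetic = loan_without_recourse(state) + standby_letter_of_credit(state)
    print(state, loan_with_recourse(state), synthetic)  # identical in each state
```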

 

The Connecticut State Income Tax: Progressive, Regressive and Proportional

Dr. Gary M. Crakes, Southern Connecticut State University, CT

Dr. Melville T. Cottrill, Southern Connecticut State University, CT

 

ABSTRACT

The State of Connecticut enacted legislation in 1991 to establish a state income tax on wage and salary income. Over the past sixteen years it has been characterized as a progressive income tax up to the threshold income levels where the maximum tax rate applies, and proportional thereafter. However, when effective marginal tax rates are analyzed, a more complicated pattern emerges, revealing fluctuating ranges where the Connecticut state income tax is proportional, progressive, and regressive. Since the Connecticut state income tax on wage and salary income was implemented in 1991, it has frequently been identified as both a proportional and a progressive income tax: proportional, since the tax rate is constant at 5.00% above the threshold levels of income where exemptions and credits disappear; progressive, since some analysis has shown that the top 50% of filers account for 86.50% of all income and pay 95.80% of the income tax. (1, 2) In fact, it can be demonstrated that over some ranges of income the Connecticut state income tax is actually regressive. The purpose of this paper is to present the effective marginal tax rates under the Connecticut state income tax and to discuss how this rate structure performs under the interpretation of the equal sacrifice rules of tax equity. John Stuart Mill was the first to discuss the equity issue of taxation in terms of an equal sacrifice prescription. Taxpayers are considered to receive equal treatment if their tax payments involve an equal sacrifice or loss of welfare. (3) Assuming that welfare is a function of income, that sacrifice is measured in terms of the loss of utility associated with the loss of income paid in tax. Two important criteria exist for evaluating equity. The first, horizontal equity, maintains that equal sacrifice will be attained if individuals of equal taxpaying ability are taxed equally. This criterion is consistent with the principle of equality under the law. The second, vertical equity, states that to equalize sacrifice, individuals with unequal taxpaying ability should be taxed unequally. This criterion is also consistent with the legal principle of equal treatment, but is based on the premise that the tax burden is measured in something other than dollars, namely the utility associated with those dollars. Therefore, under the equal sacrifice rule and vertical equity, the tax burdens of individuals with dissimilar incomes will be equalized by paying different dollar amounts of tax but giving up equal amounts of utility. Application of the criterion of vertical equity typically requires both the acceptance of the principle of diminishing marginal utility of income and the willingness to make interpersonal comparisons of utility. Under the criterion of vertical equity, the determination of the appropriate income tax rate structure is, in theory, dependent upon the rate at which the marginal utility of income diminishes. It is frequently assumed that only progressive taxes satisfy the aforementioned criteria. Actually, despite perceptions to the contrary, progressive, proportional, and regressive taxes are all potentially consistent with the equal sacrifice rule and vertical equity. If the rate of decline of the marginal utility of income is equal to the rate of increase of income, then a proportional tax satisfies the equal sacrifice rule. If the marginal utility of income decreases at a more rapid rate than the rate of increase in income, then a progressive tax is appropriate.
And if the marginal utility of income declines at a slower rate than the rate of increase in income, a regressive tax is consistent with application of the equal sacrifice rule and vertical equity. (4) Whatever assumptions are made with respect to the rate of decline in the marginal utility of income, they should be applied consistently across marginal tax rates over all ranges of income. If not applied consistently, the rate structure should at least somehow reflect the marginal utility of income characteristics of the taxpaying population. If a proportional tax is selected, then marginal tax rates should be constant as income rises. If a progressive tax is instituted, then marginal tax rates should rise as income rises. If a regressive tax is implemented, then marginal tax rates should decline as income rises.
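
The proportional-tax case in the preceding argument can be made concrete with a standard textbook illustration using logarithmic utility, where marginal utility falls at exactly the rate income rises. This derivation is an added illustration, not part of the original paper.

```latex
% Equal absolute sacrifice with U(y) = \ln y, where marginal utility 1/y
% declines at exactly the rate income rises. Requiring the same utility
% loss k from every taxpayer with income y and tax T(y):
\[
  U(y) - U\bigl(y - T(y)\bigr) \;=\; \ln y - \ln\bigl(y - T(y)\bigr) \;=\; k
  \quad\Longrightarrow\quad
  \frac{T(y)}{y} \;=\; 1 - e^{-k},
\]
% a constant average (and marginal) rate, i.e. a proportional tax. Utility
% whose marginal value declines faster (slower) than income rises would
% instead support a progressive (regressive) schedule under the same rule.
```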

 

What Local Responsiveness Really Means to Multinational Corporations

Dr. Stephanie Hurt, Meredith College, Raleigh, NC

 

Abstract

The concept of local responsiveness offers a potentially effective lens through which to view the internationalization process of firms and the management of multinational corporations (MNCs). However, the last two decades of business internationalization suggest that the concept of local responsiveness as it has usually been applied does not help us understand the difficulties of firms' forays into foreign waters. We feel this is due to an insufficient consideration of the difficulty and importance of transferring managerial practices during the internationalization process. In this paper, we attempt to re-invigorate the concept of local responsiveness by demonstrating how the concept has been too narrowly interpreted in terms of product/market similarities and by suggesting that true responsiveness should include adaptation to the very different mindsets of the host country nationals to be managed. The global integration-local responsiveness (GI-LR) model (Prahalad & Doz, 1987; Bartlett & Ghoshal, 1989) has been a very influential framework dealing with international business strategy and with management relationships and control within MNCs. Essentially, it points out that MNCs need to organize and manage in light of the importance of integrating their operations for global efficiency and the importance of responding to the various local environments in which they operate. Bartlett and Ghoshal (1989) extended this approach by proposing a typology of international strategies and of MNC organization: a global strategy, a transnational strategy, and a multi-domestic strategy. The framework is depicted graphically in Figure 1. Research off-shoots have largely concentrated on issues of strategy formulation and management, organizational structure, and subsidiary characteristics in general. Given the original emphasis on strategy formulation and management as well as organizational structure, much of the literature has focused on top managers (Murtha, 1998). This was favored by Bartlett and Ghoshal themselves from the outset (1990). Other off-shoots have focused on relative degrees of subsidiary independence (Harzing, 2000), including subsidiary innovation capabilities and entrepreneurship (Birkinshaw, 1997). As research has progressed, the framework has not been without its critics. It has been called ambiguous, and the evidence concerning the relationship between strategy, environment, and the performance of the MNC inconclusive (Asmussen). Bartlett and Ghoshal's typology of MNCs, which came out of the framework, has also been criticized; some suggest that the 'transnational solution' has proved to be suitable for only a very few, special MNCs (Rugman, 2005). We believe that the framework has helped direct internationalization research in new ways and has thus led to asking questions about the parameters of managing international businesses that might not have been posed otherwise. We regret, however, that research into local responsiveness has taken such a strong market and marketing direction, something that was indeed emphasized in Bartlett and Ghoshal's original work (1989). Local responsiveness was considered largely in terms of the need to adapt products to local markets; it was largely a product/market model, which led managers to think about whether their industries were really global in the sense that their products could be produced and sold the same way worldwide.
This led to issues of how resources (primarily development and production resources) could be managed most effectively and what latitude could be given to subsidiaries. Posing the old question about multi-domestic strategy applicability, Bartlett & Ghoshal suggested an organizational and management typology for product/market compatibility that required adaptation to local conditions. In essence, we always knew that consumer electronics was not soap or food, but the idea that this fact should influence management relations and control was relatively new. Organizational knowledge of products and markets became an asset that needed to be deployed one way or the other and generalized in the firm. The question then became who should transfer knowledge and capabilities, and how. However, the framework did not necessarily prepare firms to manage local managers and workers having different mindsets from those of the MNC home office; the focus of the framework had always been on the mindsets of the MNC managers, and at the top levels at that. Certainly, MNCs had often produced in the Third World, and so problems with local production and office employees had been faced before; however, the concept of local responsiveness did not really cover these issues. When MNCs internationalized into Central Europe and China in the 1990s, relations with employees and local managers proved difficult and surprising and had to be dealt with. Many of these contacts between MNCs and local managers and employees in these regions took place through joint ventures, often a preferred entry method. The work attitudes and professional mindsets of managers and workers in these countries became a new focus for researchers, as did the absorptive capacity of MNC subsidiaries (Child, 1991; Child & Markoczy, 1993; Child & Czegledy, 1996; Bjorkman, 2002; Fey, Nordahl & Zatterstrom, 1999; Minbaeva, Pedersen, Bjorkman, Fey, & Park, 2003). Ways of managing in 'transitioning economies', later called 'transforming economies', were not included in the concept of 'local responsiveness'. Host country worker attitudes, managerial practices, and routines, all embedded in a socio-economic context different from the MNC's, fall outside the framework. There is now a small, but growing, body of research that integrates HRM into the GI-LR framework (Caligiuri & Stroh, 1995; Lu & Bjorkman, 1997; Graham & Trevor, 2000), but for the most part, issues that deal with responsiveness to local work attitudes and the applicability of MNC practices in host countries have not been sufficiently examined. This lack becomes particularly bothersome when dealing with the internationalization of service industries, with their human resource intensity, so dependent upon people to ensure delivery of their offerings. The difficulties undergone by retailers and Disney theme parks in internationalization bear witness to the fact that for these firms local responsiveness did not include the management of host country personnel, even though their managerial practices and routines at this level were a crucial part of their business model.

 

Modeling Purchasing Power Parity Using Co-Integration: Evidence from Turkey

Dr. Cem Saatcioglu, Istanbul University, Istanbul, Turkey

H. Levent Korap, Marmara University, Istanbul, Turkey

Dr. Ara G. Volkan, Florida Gulf Coast University, Fort Myers, FL

 

ABSTRACT

In this study, we construct a co-integration model of the Turkish economy using high frequency data to examine the validity of the purchasing power parity (PPP) theory. The ex-post estimation results derived from the analysis of monthly observations for the January 1987 – December 2004 period generally support the use of the PPP theory in predicting the movement of currency values in the Turkish economy. The methodology developed in this study can be used in other countries to ensure the success of economic policies that depend on the existence of PPP relationships. During the 1990s, the Turkish economy endured a highly unstable growth performance with chronic double-digit inflation that impacted the course of many domestic macroeconomic aggregates (Ertuğrul and Selçuk, 2001: 13-40; Korap, 2006; and Saatçioğlu and Korap, 2006). Figure 1 below indicates that, beginning in 1989, when capital account liberalization was completed, and through 1999, large differences between the domestic and foreign inflation rates existed, along with parallel movements between currency depreciation rates and domestic inflation. Given the observed behavior of the data in Figure 1 for DEPRECIATION (the annualized depreciation rate of the nominal exchange rate of the Turkish Lira (TL) / US$) versus DOMESTICINF [the annualized inflation rate based on the consumer price index (CPI)] and WORLDINF (the representative annualized CPI-based world inflation), an argument can be made for the existence of PPP relationships in the Turkish economy. The data for domestic variables originate from the electronic data delivery system of the Central Bank of the Republic of Turkey (CBRT), and the sources for the world price level and inflation are the IMF-IFS CD-ROM database. Given the behavior of the variables in Figure 1, it is warranted to examine whether PPP relationships hold in the Turkish economy. To empirically analyze this proposition, we apply contemporaneous estimation techniques. If the results of our analyses are positive, policy makers can be confident of the success of economic programs they devise to manage domestic currency values and domestic inflation. In the following sections, an economic model is developed and used to empirically reveal the existence of PPP relationships in the Turkish economy. Finally, we present our conclusions and discuss future research opportunities. We start by relating the PPP relationships to the law of one price. Froot and Rogoff (1994) express that, for any good i, ptd = et + ptf (1), where ptd is the log of the domestic currency price of good i, ptf is the analogous foreign-currency price, and et is the log of the domestic currency price of foreign exchange, ensuring identical prices under unfettered trade in goods. Letting equation (1) hold for every individual good would lead to the assumption that it must hold for any identical basket of goods. Even if the law of one price fails for individual goods, it is possible that the deviations cancel out when averaged across a basket of goods (Froot and Rogoff, 1994). Moreover, adopting an international perspective generally entails the use of price indices from different countries with varying weights and mixes of goods across these countries, rather than identical baskets as hypothesized in the PPP theory. Finally, international price indices for identical baskets of goods can still be constructed for this purpose. Instead of using the absolute form of the PPP relationships in equation (1), we can develop a weak form of the PPP theory.
Following Salvatore (1998: 466), the absolute PPP theory would give the exchange rate that equilibrates trade in goods and services while completely disregarding the capital account. A nation experiencing capital outflows would have a deficit in its balance of payments, while a nation receiving capital inflows would have a surplus, if the exchange rate were the one that equilibrated international trade in goods and services. A second objection to the absolute PPP theorem is that this version of the PPP would not give the exchange rate that equilibrates trade in goods and services because of the existence of many non-traded goods and services whose prices depend in part on relative productivity levels (Jonsson, 2001: 247). Considering these deficiencies, the relative or weak form of the PPP can be used to analyze the theory, where the change in the exchange rate over a period of time would be proportional to the relative changes in the price levels in different countries over the same time period. In this sense, Taylor and Taylor (2004) note that relative PPP holds if absolute PPP holds, but absolute PPP does not necessarily hold when relative PPP holds, since it is possible that common changes in nominal exchange rates are happening at different levels of purchasing power of the currencies examined. In addition, we can consider the pricing-to-market theory of Dornbusch (1985) and Krugman (1986), which examines why import prices fail to fall in proportion to an exchange rate appreciation. The pricing-to-market theory emphasizes that, due to imperfect competition, there is a price stickiness phenomenon in international trade. With constant elasticity of demand, producers who are monopolists or oligopolists working under imperfect competition may charge different prices in different countries, while exchange rate changes would not cause fluctuations in the relative prices charged (Obstfeld and Rogoff, 2000). This is possible because there are many industries that can supply separate licenses for the sale of their goods at home and abroad (Sarno and Taylor, 2002: 70). Taylor (2000) considers methodological problems in prior studies, such as employing low frequency data and linear model specifications. As a result, such studies do not empirically support the PPP theory, since such specification problems can lead to a bias towards findings of slow convergence of real exchange rates to the long run equilibrium. Thus, the existence of a long run equilibrium relationship between the domestic price level, the nominal (spot) exchange rate, and the foreign price level, all expressed in logarithms and with statistically significant a priori signs, would give support to the absolute PPP theory. Froot and Rogoff (1994) and Taylor (1996) emphasize that an obvious problem with equation (1) above is that exchange rates and prices might reasonably be considered endogenous and simultaneously determined, so there is no compelling reason to put exchange rates on the left hand side rather than vice-versa. In this sense, single equation results may be seriously misleading due to a simultaneity bias and/or invalid conditioning (Gökcan and Özmen, 2001). This requires that contemporaneous time series estimation techniques be employed to test the PPP hypothesis, considering the integration properties of the relevant variables in the search for a valid long run relationship constructed by means of economic theory.
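
The following is a minimal sketch of the kind of unit-root and co-integration testing described above, applied to monthly series for the log exchange rate and log relative price levels. The CSV file and column names are hypothetical placeholders, and the tests (ADF and Engle-Granger) come from the statsmodels library, not from the authors' own estimation code, which is based on a fuller co-integration framework.

```python
# Sketch: test for a long-run (absolute) PPP relation between the log TL/US$
# exchange rate and log relative prices, e_t = alpha + beta*(p_t^d - p_t^f) + u_t.
# Data file and column names are hypothetical.

import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller, coint

data = pd.read_csv("turkey_monthly.csv", index_col=0, parse_dates=True)
e = np.log(data["exchange_rate"])                                   # log nominal TL/US$ rate
rel_p = np.log(data["cpi_domestic"]) - np.log(data["cpi_world"])    # log relative price level

# 1) Check that both series behave as I(1): fail to reject a unit root in levels.
for name, series in [("log exchange rate", e), ("log relative prices", rel_p)]:
    stat, pvalue, *_ = adfuller(series, regression="c")
    print(f"ADF {name}: p-value = {pvalue:.3f}")

# 2) Engle-Granger co-integration test; rejection of "no co-integration"
#    supports a long-run PPP relationship between the two series.
stat, pvalue, _ = coint(e, rel_p)
print(f"Engle-Granger co-integration p-value: {pvalue:.3f}")
```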

 

Do Investors Over- Or Under-React? Evidence from Hong Kong Stock Market

Jianzhou Zhu, University of Wisconsin - Whitewater

 

ABSTRACT

Using daily data on the Hang Seng Index over the sample period from December 31, 1986 to October 6, 2006, we investigate the behavior of the Hong Kong stock market following extraordinary price movements in a single trading day. We find evidence that investors in the Hong Kong stock market tend to underreact to good news and overreact to bad news. The finding is consistent with the uncertain information hypothesis, which states that investors tend to err on the side of caution when they are uncertain about the information they receive. This behavioral tendency on the part of investors causes stock prices to overshoot at the arrival of bad news and undershoot at the arrival of good news, a phenomenon more likely to be observed in markets where the quality of information is generally poor and lacks precision.  In this study we examine the behavior of Hong Kong stock market returns following extraordinary price movements in a single trading day. The efficient market hypothesis posits that investors respond rationally to news arrivals and that all relevant information is incorporated into stock prices fully and rapidly as it becomes publicly available. Adjustment of prices from one equilibrium fundamental value to another is accomplished in one single movement and leaves no opportunity for abnormal returns based on publicly available information. According to this hypothesis, price changes of any magnitude would not generate any predictable patterns of equity returns in either the immediate or the distant future. Recent developments in behavioral finance, however, have given rise to alternative hypotheses about the behavior of stock returns that contradict the efficient market hypothesis. For example, the overreaction hypothesis states that investors tend to overreact to new information and generate price movements beyond the new equilibrium level justified by the news. As investors realize later that they have overreacted to the information and take corrective actions, price changes in the opposite direction of the initial movement will be observed. On the other hand, the under-reaction hypothesis argues that investors consistently overweight their prior beliefs and thereby under-react to new information. Price adjustments from one equilibrium fundamental value to another tend to be accomplished through a series of smaller movements in the same direction rather than the single movement described in the efficient market hypothesis. Both the overreaction and under-reaction hypotheses imply a predictable pattern of stock returns following the initial price reaction to new information. The purpose of this paper is to empirically determine which of the above hypotheses provides the best description of the behavior of Hong Kong stock market returns. In their seminal study, De Bondt and Thaler (1985) advance the overreaction hypothesis and test its empirical validity using monthly returns of common stocks listed on the New York Stock Exchange during the period between January 1926 and December 1982. The authors rank the stocks based on their three-year market-adjusted excess returns. The 35 stocks with the largest positive excess returns are assigned to the winner portfolio and the 35 stocks with the largest negative excess returns are assigned to the loser portfolio. They then track the excess return of both portfolios over the three-year period after the portfolio formation and find evidence consistent with the overreaction hypothesis. The loser portfolio of 35 stocks outperforms the market by 19.6%. 
The winner portfolio, on the other hand, earns about 5% less than the market. The difference in excess return between the two extreme portfolios equals 24.6%, with a t-statistic of 2.2. Furthermore, as the authors lengthen the portfolio formation period to obtain more extreme initial excess returns, the subsequent price reversal during the testing period becomes more pronounced. Conversely, as the authors shorten the portfolio formation period to obtain less extreme initial excess returns, the subsequent price reversals during the testing period become less pronounced. In his comments on the De Bondt and Thaler (1985) study, Bernstein (1985) agrees that the long-term price overshoot and subsequent long-term reversals indicate the long-run inefficiency of the stock market. But he argues that the stock market is highly efficient in incorporating relevant information into stock prices in the short run. Zarowin (1989), however, finds evidence of stock market overreaction in the short run. Zarowin ranks common stocks according to their performance during a given month, and finds that in the subsequent month a portfolio of the past month's losers outperforms a portfolio of the past month's winners by 2.5%, with a t-statistic of 10.54. He concludes that the market is weak-form inefficient even in the short run. Atkins and Dyl (1990) study short-run overreaction by examining the behavior of common stock prices following an extreme price change during a single trading day. Their evidence shows that stock investors do overreact at news arrivals, especially to negative information. But the magnitude of the overreaction, while statistically significant, is small compared to the bid-ask spreads observed for the sample stocks. Thus, they find no evidence that the stock market is inefficient after transaction costs are considered. In another study, Zarowin (1990) reexamines De Bondt and Thaler's evidence on stock market overreaction, controlling for size differences between winners and losers. He finds that losers are usually smaller than winners. He then performs two sets of tests to examine the role of firm size in the overreaction phenomenon. First, by matching subgroups of winners and losers of equal size, he finds that all return discrepancies, except those in January, are eliminated. This suggests that an effect other than overreaction, such as the tax-loss-selling phenomenon, may be at work. Second, he performs separate analyses on periods when losers are smaller than winners and on periods when winners are smaller than losers. When losers are smaller, they outperform winners; when winners are smaller, they outperform losers. The tendency for losers to be smaller than winners, therefore, appears to be responsible for the overreaction phenomenon. Thus, his results show that size differences, not investor overreaction, drive the winner-versus-loser phenomenon, and that a widely regarded efficient market anomaly is subsumed by size and seasonal phenomena. The studies mentioned above test the overreaction hypothesis using returns on individual stocks. As such, their results are subject to cross-sectional differences, such as bid-ask spread, firm size, infrequent trading, ownership structure, and other firm-specific factors that may explain the observed price reversals. For example, Cox and Peterson (1994) examine short-term stock return behavior following one-day price declines of 10% or more in terms of bid-ask bounce, market liquidity, and overreaction. 
They do not find evidence consistent with the overreaction hypothesis and observe that stocks with large one-day price declines seem to perform poorly in the subsequent period. They conclude that most of the reversals are due to bid-ask spread and market liquidity. Chopra, Lakonishok, and Ritter (1992) perform a comprehensive evaluation of the overreaction hypothesis. They use the empirically determined price of beta risk and calculate abnormal returns using a comprehensive adjustment for price. They find an economically significant overreaction effect, which cannot be attributed to size or beta. Since the overreaction effect reported in their study is much more pronounced for smaller firms than for larger firms, they hypothesize that individuals, the predominant holders of stock of small firms, may overreact, while the dominant holders of large stocks, namely institutions, do not. In this study, we investigate whether investors in the Hong Kong stock market appropriately react, over-react, or under-react to unanticipated and dramatic news by examining the behavior of stock market returns subsequent to extreme price movements in a single trading day. We assume that one-day extreme price changes reflect the arrival of unanticipated and dramatic information. If the information is appropriately incorporated into stock prices during the day it becomes publicly available, we would expect no abnormal returns during the subsequent period and the efficient market hypothesis would be confirmed. If stock prices overshoot during the days of news arrivals, we would expect price reversals during the subsequent period and the overreaction hypothesis would be confirmed. On the other hand, if stock prices undershoot during the days of news arrivals, we would expect price continuations during the subsequent period and the under-reaction hypothesis would be confirmed. To avoid the confounding effects of cross-sectional differences in individual stocks, we use abnormal returns on the market index rather than abnormal returns on individual stocks in our data analysis. We rank all trading days within the sample period based on their abnormal returns. The trading days with the largest positive abnormal returns are designated as the "best performing days" while the trading days with the largest negative abnormal returns are designated as the "worst performing days". Then the abnormal returns following each of the best and worst performing days are calculated and compared to identify any over- or under-reactions. We find evidence that investors in the Hong Kong stock market overreact to unanticipated and dramatic negative news but under-react to unanticipated and dramatic positive news, although our evidence for the over-reaction to negative news is relatively weak compared to that for the under-reaction to positive news.  The remainder of this paper is organized as follows. Section II describes the data and testing procedure. Section III reports and interprets the empirical findings. Section IV concludes.
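A minimal sketch of the testing procedure described above, using daily index returns: the number of extreme days, the abnormal-return proxy (return minus the sample mean), and the post-event window are illustrative assumptions, not the paper's exact specification.

```python
# Sketch: rank trading days by abnormal return, then measure the average
# cumulative abnormal return (CAR) over the days that follow extreme days.
import numpy as np
import pandas as pd

def post_event_car(returns: pd.Series, n_extreme: int = 50, window: int = 10):
    """returns: daily index returns; n_extreme: number of best/worst days;
    window: number of trading days tracked after each event day."""
    abnormal = returns - returns.mean()               # crude abnormal-return proxy
    best_days = abnormal.nlargest(n_extreme).index    # "best performing days"
    worst_days = abnormal.nsmallest(n_extreme).index  # "worst performing days"
    positions = {day: i for i, day in enumerate(abnormal.index)}

    def avg_car(event_days):
        cars = []
        for day in event_days:
            start = positions[day] + 1
            post = abnormal.iloc[start:start + window]
            if len(post) == window:
                cars.append(post.sum())
            # price continuation -> CAR has the same sign as the event day
            # price reversal     -> CAR has the opposite sign
        return float(np.mean(cars))

    return avg_car(best_days), avg_car(worst_days)

# Hypothetical usage with a CSV of Hang Seng Index closing prices:
# prices = pd.read_csv("hsi_daily.csv", parse_dates=["date"], index_col="date")["close"]
# car_after_good, car_after_bad = post_event_car(prices.pct_change().dropna())
# print(car_after_good, car_after_bad)
```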

 

The Classification and Evaluation Model for Product Bundles

Dr. Tsuen-Ho Hsu, National Kaohsiung First University of Science and Technology, Taiwan

Kuei-Feng Chang, National Kaohsiung First University of Science and Technology, Taiwan

 

ABSTRACT

In the previous literature on bundles, most scholars have focused on the pricing strategies of product bundles. However, the characteristics of bundles and the factors consumers use to evaluate them have seldom been examined. Based on a literature review, this study classifies product bundles into four types (Integrated, Co-existing, Conceptual and Random) along two dimensions: degree of functional integrity and degree of symbolic increase. In addition, this study starts with consumers' satisfaction to identify the factors that influence purchasing and constructs a product evaluation model for bundles. Finally, this study utilizes the above results and the complementary relationships shown by Oxenfeldt (1966) to explain the four different types of bundle and to present their marketing implications. Bundling of products is widely practiced in today's marketplace. Marketers utilize joint pricing for the sale of two or more products and/or services in a single package (Guiltinan, 1987; Kaicker et al., 1995; Stremersch and Tellis, 2002). From an economic viewpoint, most of the previous literature on bundling has concentrated on the pricing strategy of bundling (e.g. Guiltinan, 1987; Venkatesh and Mahajan, 1993; Johnson et al., 1999; Soman and Gourville, 2001; Chung and Rao, 2003; Janiszewski and Marcus, 2004). In economic terms, if two products and/or services have a complementary relationship, the reservation price (the maximum amount buyers are willing to pay) for a bundle may exceed the sum of the reservation prices for the individual components and produce a high consumer surplus (the amount by which the individual's reservation price exceeds the actual price paid) for consumers (Adams and Yellen, 1976; Telser, 1979; Guiltinan, 1987). On the other hand, Harlam et al. (1995) utilized the value function of prospect theory (Kahneman and Tversky, 1979; Thaler, 1985) to examine how consumers evaluate the outcomes of component as well as bundle pricing and make a purchase choice. They found that bundles composed of complements have a higher purchase intention than bundles of unrelated components. In addition, Yadav and Monroe (1993), basing their views on transaction utility theory, considered consumers' perceived savings when they evaluated a bundle offer. A mixed-bundling strategy was utilized in the research of Yadav and Monroe (1993); consumers could buy either the bundle or the component products. They found that consumers gained transaction utility from discounts associated with the component products in a bundle plus any discount associated with the bundle. Based on the above literature review, this study poses four questions. First, besides the complementary relationship, does any other relationship between components exist in a bundle? Second, besides the price factor, do any other factors influence consumers in their bundle purchasing? Third, based on the above questions, what is the evaluation framing structure for bundles? Last, besides the pricing of bundles, what is the marketing strategy for product bundles? Thus, the purpose of this study is outlined as follows: 1. Classifying product bundles based on product benefits for consumers. 2. Finding the factors that influence product bundle evaluation. 3. Framing the consumer evaluation structure. 4. Developing marketing implications for academics and practitioners. Guiltinan (1987) described the key to effective bundling as the degree of complementarity among the services or products in the bundle. 
Simonin and Ruth (1995) also indicated that consumers' perception of the degree to which the products in the bundle "fit" together is expected to play a key role in the evaluation of the bundle and its effects on price judgments. Mulhern and Leone (1991) and Harlam et al. (1995) observed complementary effects in their studies: bundles composed of complements have higher purchase intentions than bundles of unrelated products. To sum up the above scholars' views, different types of bundled products will influence consumers' evaluation and purchase intention. However, how many types of bundling exist among bundled products? Besides the complementary relationship, are there any other relationships between the products of a bundle? In previous studies, many scholars relied on the rational or "economic man" model and invested more research in bundle pricing strategies aimed at maximizing transaction utility; research about the type or image of whole product bundles, on the other hand, is still limited. Simonin and Ruth (1995) utilized two dimensions, degree of product integration and degree of recognizability, to divide product bundles into four types (Figure 1). However, that classification merely describes the result of bundling; the implications of bundled products and their correlation with consumer outcomes are still not clear. Hirschman and Holbrook (1982) indicated that the rational model does not capture the multisensory imagery, fantasy, fun and emotions associated with the consumption of some products. Park et al. (1986) noted that consumers' needs could be classified as being either symbolic or functional. They argued that functional needs are related to specific and practical consumption problems, while symbolic needs are related to self-image and social identification. In the empirical study of Bhat and Reddy (1998), consumers did not have any trouble accepting brands that have both functional and symbolic appeal and could accept both functional and symbolic meaning at the same time. Thus, this study draws on the above points of view and utilizes another two dimensions, degree of functional integrity and degree of symbolic increase, to classify bundled products (Figure 2 and Table 1). The definitions of the four types of bundled products are as follows:  1. Integrated: at least one of the component products can work by itself; when the joint component appears, it increases the integrity of the product bundle for the consumer. 2. Co-existing: at least one of the component products cannot work by itself unless a joint component appears that can develop common functional or instrumental targets. 3. Conceptual: the components may work independently; however, when the joint component appears, it can increase or enhance mental satisfaction. 4. Random: the components can work independently; the bundle exists only because of a derived-demand relationship between the products. Murphy and Enis (1986) indicated that consumers assess product satisfaction in terms of benefits expected minus costs incurred (Satisfaction = Benefits expected - Costs incurred) and argued that the costs should be conceptualized on two independent dimensions: effort and risk. Effort is the amount of money, time, and energy that the buyer is willing to expend to acquire a product. It is an objective measure of the value the consumer places on the product. 
Strategically, price bundling obviously benefits consumers by providing monetary savings (Yadav and Monroe, 1993); on the other hand, product bundling benefits consumers by reducing the time and cognitive effort required to make purchase decisions (Moriarty and Kosnik, 1989). Especially in practice, bundles are often offered at a discount off the sum of the prices of the bundled components (Sarin et al., 2003). Consumers can pay less to get the bundled components at the same time. Thus, bundles can offer added value through the integration of products in the bundle and/or through bundle discounts (Stremersch and Tellis, 2002). Based on the above points of view, by buying product bundles consumers can achieve time savings (reducing transaction costs) and/or money savings (increasing consumer surplus), so that the overall effort invested is minimized. Thus, this study advances Propositions 1, 2 and 3.
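To make the two-dimensional classification defined above concrete, the sketch below maps a bundle's scores on functional integrity and symbolic increase to the four types. The quadrant assignment follows Figure 2 / Table 1 of the study, which is not reproduced here, so the mapping and the high/low threshold used below are placeholders rather than the study's actual scheme.

```python
# Sketch: a generic two-dimensional classifier for product bundles.
# The quadrant-to-type mapping below is a placeholder assumption.
TYPE_BY_QUADRANT = {
    ("high", "high"): "Integrated",    # placeholder assignment
    ("high", "low"):  "Co-existing",   # placeholder assignment
    ("low", "high"):  "Conceptual",    # placeholder assignment
    ("low", "low"):   "Random",        # placeholder assignment
}

def classify_bundle(functional_integrity: float, symbolic_increase: float,
                    threshold: float = 0.5) -> str:
    """Scores are assumed to lie in [0, 1]; 'threshold' splits high from low."""
    f = "high" if functional_integrity >= threshold else "low"
    s = "high" if symbolic_increase >= threshold else "low"
    return TYPE_BY_QUADRANT[(f, s)]

print(classify_bundle(0.8, 0.2))  # -> "Co-existing" under the placeholder mapping
```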

 

Relationships Among Service Orientation, Job Satisfaction, and Organizational Commitment in the International Tourist Hotel Industry

Yi-Jen Chen, Chaoyang University of Technology, Taichung, Taiwan

 

ABSTRACT

Research in service orientation, job satisfaction, and organizational commitment suggests that service orientation is indispensable for the successful management of the service industry. Furthermore, job satisfaction among employees contributes to the enhancement of employees' commitment to their organizations. This study used a questionnaire survey to investigate the relationships among service orientation, job satisfaction and organizational commitment of employees who have worked at least one year in international tourist hotels. Hotels in the tourist industry adequately represent the service industry in Taiwan. For this study, a total of 1,100 questionnaires were sent to major hotels in early June of 2005. The human resource departments of these hotels distributed the questionnaires to their employees. By the end of December 2005, 350 responses had been collected, a response rate of 31.8%. The collected data were analyzed through LISREL (Linear Structural Relationships), and the analysis supported the following hypotheses: (1) service orientation is positively correlated with job satisfaction; (2) service orientation is positively correlated with organizational commitment; (3) job satisfaction is positively correlated with organizational commitment; (4) service orientation is positively correlated with organizational commitment through job satisfaction. Implications of the study, drawn from a reflection on related theories and empirical studies, serve as a credible reference for future management practices in the international tourist hotel industry.  Data released by the World Tourism Organization (2004) show that, in comparison with the growth ratio of tourism regions in 1984, the growth of tourism in the Southeast and Northeast Asian regions was as high as 33%. Compared with the growth ratio of below 20% in other areas, the achievement of East Asia is deemed significant. The World Tourism Organization, based on its 1995 data, predicted that by the year 2020 the number of global tourists will reach 1,561,100,000. Among them, the top three markets worldwide are: 717 million in Europe (45.9%), 397.2 million in the Asia-Pacific Rim (25.4%), and 282.3 million in the Americas (18.1%). In consideration of the government's encouragement of the free-visa measure, the active promotion of participation in the World Trade Organization, the development of the Asia-Pacific Operations Center, the successive completion of major domestic transportation construction projects, and the implementation of the two-day weekend, the tourist hotel industry of Taiwan can reasonably reassure itself of stable positive growth. Consequently, recreation market development is highly anticipated by the industry.  In the past 20 years Taiwan has earned a reputation as a world-renowned "High-Tech Island."   However, industries in recent years have gradually transplanted their operations to China under the impact of a "magnetic absorption" attributable to low manufacturing costs and the sheer size of the Chinese market. This move has caused a significant impact on the economy of Taiwan. In order to counter such an economic challenge, the Taiwanese government has revised its policy by promoting Taiwan both as a high-tech and as a tourism island. 
In 2004, the Premier of Taiwan's Executive Yuan announced that Taiwan would develop its tourism industry under the slogan "Visit Taiwan Year." Actively promoting the "Doubling Tourist Arrivals Plan," the island, with its tremendous dollar reserves, launched a movement to increase tourism.  The Premier himself instructed the Tourism Bureau of the Ministry of Transportation and Communications to strengthen international tourism publicity and to launch active promotion in major countries and regions such as Japan, Korea, Hong Kong, Singapore, Europe, and America. He called for a serious commitment to developing new themes and promotional films in order to market Taiwan tourism effectively. These sales promotion efforts were aimed at increasing incentives and attracting more international tourists. According to Weng & Wang (2006), Taiwan must mold a new image as an "Island of Technology and Tourism." To live up to this image, the Tourism Bureau of the Ministry of Transportation and Communications has to expend great effort to promote Taiwan's tourism activities centered on local attractions, and must carry out its plans to encourage private participation. The construction of recreational facilities, including tourist hotels, becomes critical in this endeavor.  The official data supporting the government's vision have shown positive trends. Since 2004, the number of tourists who have visited Taiwan has posted positive growth of 10%-15%. As such, the international tourist hotels have played a vital role in meeting the demands of competitive and continuous growth. Even as Taiwan and mainland China remain in a politically edgy relationship, the opening of cross-strait tourism can be expected to provide significant benefits to the growth of Taiwan's tourism revenues. China's massive tourist population and its purchasing power alone contribute to this optimism. More specifically, international tourist hotels will directly benefit from this situation.  According to the standards for buildings and equipment in the Regulations Governing Management of Tourist Hotels, Taiwan's tourist hotels consist of international tourist hotels and local tourist hotels. The latter, in general, cannot quite compete with the international tourist hotels due to constraints in operational scale, management efficiency, equipment and facilities, marketing capability, exposure, and reputation. At the beginning of 2006, the international tourist hotels of Taiwan numbered 62, and this number continues to climb. In 2005, the number of tourists visiting Taiwan reached 3.37 million. Among them, tourists from Japan alone accounted for 1.1 million, bringing in foreign currency revenues of about US$5 billion.  International tourist hotels constitute an industry with high client/customer contact and service access. This is where employee performance, including service attitudes, service quality, and service efficiency, directly affects operational success (Lovelock, 1996; Morey & Dittman, 1995). This phenomenon underscores the necessity of a "service orientation" in international tourist hotels.  Service orientation must rely on proactive actions. Employee training, motivation, and the redesign of systems for human resource evaluation and general assessment must revolve around a service orientation. This requires employees to be self-directed and self-motivated. Increased job satisfaction may be a secondary goal, but it is a critical one. Increased loyalty among employees naturally follows job satisfaction. 
These predicted outcomes are worthy of further exploration. Among researchers expressing a similar view, Brown et al. (2002) maintained that the employees' customer orientation process was central to a service organization's ability to maintain its market orientation. They suggested that customer orientation has a significant impact on overall service performance. Service orientation at the organizational level influences the level of employee job satisfaction (Lee et al., 1999; Yoo et al., 2000) as well as general organizational commitment (Lee et al., 1999). In summary, the purpose of this research is to explore the impact of the service orientation employed by Taiwan's international tourist hotels on employees' job satisfaction and organizational commitment. In order to achieve this goal, one must first determine the associations among service orientation, job satisfaction and organizational commitment in international tourist hotels. The findings are predicted to be consistent with past research in related studies. A tentative conclusion will be drawn in the hope of strengthening the management practices of international tourist hotels. Modern societies in the 21st century treat the customer as a primary concern.  Customer relationship management (CRM) has witnessed greater customer loyalty and satisfaction as businesses evolve into "customer-oriented" organizations.  For a firm seeking to become customer-oriented, promoting and practicing service within the corporate culture is undeniably the trait behind successful corporations.
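As a simple illustration of the hypothesized relationships above, the sketch below computes pairwise correlations among composite scores for service orientation, job satisfaction, and organizational commitment. The file and column names are hypothetical, and this is a far simpler check than the LISREL structural model actually used in the study.

```python
# Minimal sketch: pairwise correlations among the three constructs, assuming a
# survey file where each construct has already been averaged into one score.
import pandas as pd
from scipy import stats

df = pd.read_csv("hotel_survey.csv")  # hypothetical columns listed below
pairs = [("service_orientation", "job_satisfaction"),
         ("service_orientation", "org_commitment"),
         ("job_satisfaction", "org_commitment")]

for a, b in pairs:
    r, p = stats.pearsonr(df[a], df[b])
    print(f"{a} vs {b}: r = {r:.2f}, p = {p:.3f}")
# Positive, significant r values would be consistent with hypotheses (1)-(3);
# testing the mediation in hypothesis (4) requires a structural model such as LISREL.
```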

 

The Joint Effect of Competition and Managerial Ownership on Voluntary Disclosure: The Case of China

Dr. Jianguo Yuan, Huazhong University of Science and Technology, Wuhan, PRC

Dr. Huafang Xiao, Huazhong University of Science and Technology, Wuhan, PRC

 

ABSTRACT

Drawing on prior empirical research that examined the determinants of voluntary disclosure separately, this paper empirically investigates the joint effect of managerial ownership and competition in the product market on the levels of voluntary disclosure of listed Chinese companies. Using an aggregated disclosure score to measure voluntary disclosure, the results indicate that managerial ownership is negatively associated with the extent of voluntary disclosure when the degree of competition the company faces is low; this relationship does not exist when competition is high. In addition, firms with lower competition and higher managerial ownership are less likely to make additional disclosures.  Since the Asian financial crisis of 1997-1998, both regulators and members of the business community in East Asia have called for greater corporate transparency. The low level of corporate disclosure has been identified as one of the factors that not only contributed to the Asian financial crisis, but was also a stumbling block to the regional economic recovery (Berardino, 2001). Although China was not seriously affected by the Asian crisis, Chinese companies were criticized for a lack of financial reporting transparency. Chinese regulators have recognized that equity markets require more disclosure in order to function more effectively, and have been actively improving voluntary disclosure in recent years. In particular, Chinese regulators have drawn attention to the role of corporate governance and share structure in the poor level of corporate disclosure and have called for more disclosure in the annual reports of listed Chinese companies. These concerns prompted the "Code of Good Corporate Governance" (2001) and the "share structure reform" in 2005. Since many studies have examined the impact of corporate governance on voluntary disclosure in China, we focus on the effect of share structure reform on voluntary disclosure, especially managerial ownership reform, in the unique Chinese product market environment. In order to deepen the state-owned enterprise (SOE) reform and explore the implementation of incentive and discipline mechanisms, the State-owned Assets Supervision and Administration Commission of the State Council (SASAC) issued the "Provisional regulations on the transfer of enterprise state-owned property rights to management" in 2005 and the "Implementation guidelines on further regulating the state-owned enterprise reform (draft)" at the beginning of 2006. The managerial ownership incentives in these two regulations exclude management stock compensation. According to SASAC, managers who were hired through open recruitment or internal competition, or who have made a significant contribution to the company, can own company stock by increasing their own holdings, but total managerial ownership should not exceed absolute or relative control. At the end of September, SASAC and the Ministry of Finance jointly issued the "Provisional Measures on the implementation of stock compensation incentives in state-controlled local listed companies" and implemented it one month later. The main type of stock compensation is the stock option. Executives and core employees interested in trading their stock holdings have incentives to improve corporate performance, disclose private information to meet the restrictions imposed by insider trading rules, and increase the liquidity of the firm's stock. 
These regulations serve to align the interests of managers closely with those of shareholders, which will strengthen the internal control of companies and increase the levels of voluntary disclosure. Nevertheless, the levels of voluntary disclosure are also influenced by competition, according to competition theory (Verrecchia, 1983). For instance, voluntary disclosure, if it is characterized as proprietary, can result in competitive disadvantages in a firm's product market. Firms are thus likely to consider the potential liabilities of providing this information when choosing an optimal level of voluntary disclosure.  In short, there are costs of disclosure in the form of releasing potentially proprietary information to competitors. In the presence of these costs, it is optimal for the firm not to disclose information. High managerial ownership aligns the managers' interests with those of the firm, and it is this combination of high proprietary costs and high managerial ownership that leads to low levels of disclosure.  The US Department of Justice uses the Herfindahl index in its anti-trust activities and treats an index above 0.18 as indicating an anti-competitive market, while the benchmark in the "Anti-monopoly Law of the People's Republic of China (draft)" is 0.5. There is a wide gap between China and developed countries. In this product market environment, an increase in managerial ownership will reduce the levels of voluntary disclosure.  Very few studies (Birt, Bilson, and Whaley, 2006) that we are aware of have been concerned with the interactive effect of managerial ownership and competition on voluntary disclosure. Accordingly, this study investigates the joint effect of managerial ownership and competition on voluntary disclosure. We find evidence that managerial ownership is negatively associated with the extent of voluntary disclosure when the competition that the company faces is low. In addition, firms with lower competition and higher managerial ownership are less likely to make additional disclosures. The evidence suggests that whether managerial ownership is related to voluntary disclosure is determined, to a certain extent, by the competition the companies face. Our analysis implies that an increase in managerial ownership not only fails to improve corporate transparency, but also reduces firms' willingness to disclose information in an environment of low competition in China.  The results of this study provide the following contributions. First, few prior studies have examined the joint effect of managerial ownership and competition on voluntary disclosure. Corporate disclosure policies are endogenously determined by the same forces that shape firms' governance structures and management incentives, and are also influenced by the competitive environment firms face (Core, 2001; Verrecchia, 1983). Second, while Chinese regulatory authorities are concerned with managerial ownership as a way of strengthening corporate governance and voluntary disclosure, our results suggest that increased managerial ownership reduces voluntary disclosure in a low-competition environment. This finding has implications for Chinese regulatory authorities, since heavy reliance is placed on managerial ownership to improve the corporate governance and transparency of SOEs. Finally, the paper provides evidence on the role of managerial ownership in an emerging market setting and provides an understanding of comparable management incentives. 
This is important given differences in the degree of development of the product market.  The remainder of this paper is organized as follows. Section 2 summarizes prior research and develops the hypotheses. Section 3 describes our methods, sample, and data, and Section 4 presents the analyses and results. Finally, some conclusions are drawn.  The distinct roles of principals and agents can lead to an agency problem if both entrepreneurs and investors are utility maximizers (Healy and Palepu, 1999). Jensen and Meckling (1976) argue that agency costs increase with the level of outside equity. The extent of managers' shareholdings could reduce agency costs, as it serves to align the interests of management with those of other shareholders, which strengthens the internal control of companies. Based on the theory that the impact of corporate governance on corporate disclosure is complementary, the extent of voluntary disclosure is expected to improve. 
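A minimal sketch of the kind of joint-effect test described above: compute a Herfindahl index per industry from firm market shares, then regress a voluntary-disclosure score on managerial ownership, a low-competition indicator, and their interaction. The file and variable names, the use of the 0.18 concentration cutoff as the low-competition threshold, and the control set are illustrative assumptions rather than the paper's exact specification.

```python
# Sketch: joint effect of managerial ownership and product-market competition
# on voluntary disclosure, using an interaction term in an OLS regression.
import pandas as pd
import statsmodels.formula.api as smf

firms = pd.read_csv("china_listed_firms.csv")  # hypothetical columns listed below

# Herfindahl index per industry: sum of squared firm market shares (0-1 scale).
firms["share"] = firms["sales"] / firms.groupby("industry")["sales"].transform("sum")
firms["hhi"] = firms.groupby("industry")["share"].transform(lambda s: (s ** 2).sum())

# Treat industries with HHI above 0.18 (the DOJ benchmark cited in the text)
# as facing low competition.
firms["low_competition"] = (firms["hhi"] > 0.18).astype(int)

# disclosure_score: aggregated voluntary-disclosure index; mgr_own: managerial
# ownership fraction; size and leverage are stand-in controls.
model = smf.ols(
    "disclosure_score ~ mgr_own * low_competition + size + leverage",
    data=firms,
).fit()
print(model.summary())
# A negative, significant coefficient on mgr_own:low_competition would be
# consistent with the joint effect reported in the study.
```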

 

The Effect of Wage Differences on the Cyclical Behavior of the Two Genders in the Labor Market

Dr. Nissim Ben-David, University of Haifa and Emek Yezreel Academic College, Israel

 

ABSTRACT

During prosperous periods, which are characterized by increases in productivity and in wages, the rates of separation from occupied jobs decrease while the probabilities of finding a new job increase. Thus, unemployment rates fall. Empirical observations, however, indicate that the magnitude of this decline is not the same for both genders. This paper investigates the effect of the business cycle on the probabilities of transition between employment and unemployment for men and women. It provides a possible explanation of how different changes in variables, such as the wages or productivity of each gender, would affect the separation rates and the probabilities of finding a job for each gender, and thus determine differences in the magnitude and the direction of the change in the rate of unemployment. The rate of unemployment tends to fluctuate between periods of economic prosperity and recession. The causes of these fluctuations in unemployment are changes in the flow of workers into and out of employment as a result of changes in the business cycle. The magnitude of these changes, however, differs between the genders. There is very little recent literature on gender gaps in unemployment rates. There was literature on the subject in the United States in the 1970s and early 1980s (see, e.g., Barrett and Morgenstern, 1974; Niemi, 1974; Johnson, 1983), but few recent papers, perhaps because female and male unemployment rates in the United States have converged. However, this convergence has not happened in all Organization for Economic Cooperation and Development (OECD) countries. The gap in unemployment rates (measured as the female rate minus the male rate) is very large in the Mediterranean countries (Spain, Greece, Italy, and France). Next come the Benelux countries (Belgium, the Netherlands, and Luxembourg), then the Germanic countries (Germany, Austria, and Switzerland), then the Nordic countries (Sweden, Finland, and Norway), and, finally, the Anglo-Saxon countries (United States, United Kingdom, Ireland, Australia, Canada, and New Zealand). In a number of the Mediterranean countries, the unemployment problem is largely a problem of female unemployment. The matching function with two-sided search (developed by Pissarides (1984), Mortensen (1982), Diamond (1982) and others) is central to the theoretical literature on unemployment. The main innovation in those papers is that market frictions are modeled by an exogenously given matching function that relates the number of matches per unit of time to the stocks of workers and firms engaged in searching. The matching function thus captures the technology that brings agents together in the market. Wages are set by decentralized bargaining between the worker and the firm after they are matched. Since finding a new trading partner is a costly and time-consuming process for both workers and firms, there is a surplus associated with the match, and this surplus is split according to the (asymmetric) Nash sharing rule. The separation rate, as well as the growth rate of the labor force, is exogenously given, and together with the matching rate of unemployed workers it determines the unemployment rate.  Many papers have used this basic framework for analyzing the labor market, although some of its basic assumptions contradict empirical findings. During the last 15 years, researchers have changed the basic assumptions of the model in order to make it conform to reality. 
The gap between the model and reality was large because of the assumption of a constant separation rate. This assumption does not fit key facts that have emerged regarding job flows. First of all, job destruction is relatively more important than job creation over time. That is, business cycles are driven primarily by large episodes of job destruction, with relatively stable levels of job creation (see Davis and Haltiwanger (1990, 1992, 1999), Faberman (2002) and others). Since job destruction is not constant, the separation rate should not be regarded as constant.  Blanchard and Diamond (1989, 1990) found empirical evidence that during recessions the flow out of employment is the main reason for the increase in the unemployment rate, while the decrease in the flow from unemployment into employment is of secondary significance. The effects of the business cycle on unemployment were also studied by Pissarides (1987, 1990, 2000), who emphasized the importance of changes in the matching function as a major reason for changes in the unemployment rate. Dramatic changes in transition probabilities during the business cycle were also found empirically by Ben-David and Weiss (1995). Ben-David (2005) proved that in equilibrium any change that affected the economic value of a filled job would be followed by a change in the separation rate. In this paper, I relax the assumption of a constant separation rate, and use the changes in the separation rates and other labor market flows to determine the changes in the unemployment rate during the business cycle. I examine the differences in transition probabilities between the genders in an attempt to explain differences in the fluctuation of the unemployment rates during the business cycle. Since most workers are part of a family grouping, the model presented in the second section defines the family as the basic decision-making unit. The principal characteristic of this definition is the high level of interdependence between decisions made by spouses. This stems from the fact that the employment status of each member affects the welfare of the entire family. This framework explains how discrimination in wage raises over the business cycle may be the cause of smaller fluctuations in the unemployment rate of women.  The gender pay gap may be the result of discrimination against women. In the presence of equal pay legislation (which all OECD countries now have), employers can exercise prejudice through differential hiring rates, something that may be easier when labor markets are slack.  Studies evaluating the impact of the degree of femaleness of an establishment on wages have, in general, found that inter-establishment gender segregation accounts for a substantial share of the wage gap (see Carrington and Troske (1995, 1998), Yoon et al. (2003), Reilly and Wirjanto (1999), Groshen (1991), Pfeffer and Davis-Blake (1987), McNulty (1967), and Buckley (1971)).  Assuming that labor markets are segregated by gender, the transition probabilities of workers between employment and unemployment are determined endogenously and are also affected by exogenous changes. The model I present is used to find the effect of the business cycle on the flows of workers into and out of employment, and to determine cross-market influences. 
In the third section, I determine how the probability that an unemployed worker finds a job and the separation rate of employed workers are affected by exogenous changes, such as changes in wages and production, which are typical of both prosperous and recessionary periods.
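The flow logic described above can be illustrated with the standard steady-state relation between the separation rate s, the job-finding probability f, and the unemployment rate, u = s / (s + f), computed here separately for each gender. The numerical rates below are purely hypothetical and only show how gender-specific changes in s and f translate into different unemployment responses; the paper's own model is richer than this.

```python
# Sketch: steady-state unemployment implied by gender-specific labor market flows.
# In a flow steady state, separations (s * employed) equal hires (f * unemployed),
# which gives u = s / (s + f).

def steady_state_unemployment(separation_rate: float, finding_rate: float) -> float:
    return separation_rate / (separation_rate + finding_rate)

# Hypothetical monthly rates in a recession and a boom (illustrative numbers only).
scenarios = {
    "men, recession":   (0.030, 0.25),
    "men, boom":        (0.020, 0.35),
    "women, recession": (0.028, 0.22),
    "women, boom":      (0.024, 0.27),   # smaller improvement -> smaller decline in u
}

for label, (s, f) in scenarios.items():
    u = steady_state_unemployment(s, f)
    print(f"{label}: u = {u:.1%}")
# If wage gains over the boom are smaller for women, their separation rate falls
# less and their finding rate rises less, so their unemployment rate fluctuates
# less over the cycle.
```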

 

Optimization of IC Manufacturing System by Using SPC/EPC Model

Dr. Jui-Chin Jiang, Chung Yuan Christian University, Taiwan, R.O.C.

Feng-Yuan Hsiao, Chung Yuan Christian University, Taiwan, R.O.C.

 

ABSTRACT

As semiconductor manufacturers devote more effort to improving process control capability, process control techniques such as statistical process control (SPC) and engineering process control (EPC) are becoming very popular in the IC industry. There is a growing need to construct a rapid-response-to-variation process control system now that customer satisfaction depends on high-quality products. This research provides an integrated SPC/EPC model to simultaneously monitor, analyze, feed back, adjust, and confirm the IC manufacturing system. An application to the contact etch procedure is discussed. Through an implementation procedure, the prevention-by-prediction control system is shown to achieve more stable product quality with lower variation. Recently, the semiconductor industry in Taiwan has grown very rapidly and plays a key role in the global market. The 0.13μm manufacturing technology has been proven in the production line, and twelve-inch fabrication is gradually replacing eight-inch fabrication as the next-generation production trend. In the past, quality improvement emphasized process detection and the elimination of sources of defects and variation. Elimination starts once assignable causes are found to increase process variation. However, process output usually exhibits a significant shift, and it takes time to identify sources of variation before effective elimination. Such an approach usually fails to take immediate and effective process compensation and to respond to system information, so the system gets shut down. To build an effective process control system, it is necessary not only to eliminate sources of variation effectively but also to respond instantly with system feedback information and process control. SPC (statistical process control), commonly used in IC manufacturing, is the tool for process control and product quality improvement. Statistical control charts are useful for detecting assignable causes when the process is out of control and then signal the engineer to eliminate the assignable causes and improve the process. However, SPC is a "passive" type of control, because it does not "control" the process or identify the types of disturbance. This is a weakness of using SPC alone in IC manufacturing, because it is a high-competition, high-unit-price industry. EPC is designed not to monitor a process, like SPC, but rather to help compensate for the effect of the disturbance. The disturbance comes from parameters or symptoms that are uncontrollable or that one is unwilling to control.  EPC may not completely remove the effect of the disturbance, especially when a substantial disturbance is introduced into the process, but it can reduce the impact and make the process more stable. Recent work on the integration of SPC and EPC has treated the EPC component as a method of tuning the system. In this combination, SPC is used to detect the assignable causes, and EPC is used to eliminate the disturbance. This study focuses on the EPC application and the integration of SPC and EPC in the IC manufacturing industry to construct a process control system that improves process quality. This study constructs a prevention-by-prediction control system for the IC manufacturing process. It integrates the basic functions and sub-systems of process control, the EPC concept, and basic statistical techniques to build up the whole system. 
Process control is the continuing process of evaluating process performance and taking corrective action when necessary. In the past, process control was mostly understood as a quality control technique, the quality improvement work done with control charts; it is usually also called "statistical process control." The purpose of process control is to make the quality characteristic meet the target value and to reduce variation. The objective of process control is to reduce all kinds of process variation so that process output is close to target. MacKay et al. (1997) proposed five strategies to reduce variation: adopting strict inspection on yield, adopting or improving feedback control, lowering variation in process inputs, adopting or improving adaptive control, and lowering the sensitivity of the process to input parameters. Statistical process control analyzes measured quality characteristic data to determine whether assignable causes exist in the process. It aims to prevent unnatural process variation and to act on assignable causes before serious effects occur. Elsayed et al. (1995) considered SPC a statistical method that monitors the process over the long term and eliminates assignable causes. SPC and EPC are the two techniques of process control; the following discussion covers their operation, differences, and commonalities. Box et al. (1997) hold that SPC uses statistical methods to improve quality, define and evaluate system performance, find and correct assignable causes, and monitor the control system. There are some differences between SPC and SQC (statistical quality control). Traditional SQC emphasizes product quality, but SPC puts the effort on the source of quality: the process. SPC is process-oriented, the process is the root cause of variation, and variation is the key determinant of product quality. In other words, the purpose of SPC is to detect the assignable cause rapidly, take corrective actions, and avoid producing more abnormal products. The control chart is one of the most efficient methods for this purpose.  The EPC concept was introduced in the work of Box and Kramer (1992). At that time, feedback and feed-forward adjustment were frequently applied in some continuous production lines, and this was named the engineering approach to process control, which is the concept behind EPC. Automatic equipment or facilities are employed in this kind of adjustment, so it is also named APC (automatic process control).  EPC assumes there is a clear relationship between the input and output of the process system.  It makes the system output approach the target value and improves process capability by adjusting controllable process parameters to compensate for or regulate the system. In a statistical framework, EPC can make on-time adjustments to compensate for system deviations caused by disturbances and thereby reduce process variation.  SPC and EPC are both methods of process control, but there are some differences between them. Based on the studies of Box (1993, 1997), Elsayed (1995) and Montgomery (1996), a comparison is shown in Table 1. Currently, some industries are becoming hybrids that combine discrete-parts and process manufacturing; the electronics industry is one example, where the front end is a process industry and the back end is an assembly industry. Therefore, the integration of SPC and EPC will be an important study task in the future. 
In recent years, academic interest in quality management has shifted toward integrating the concepts and advantages of both SPC and EPC. Many studies have found positive results from the integration of SPC and EPC.
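A minimal sketch of the integrated idea described above, assuming a simple process whose output drifts: an EPC feedback rule adjusts a controllable input toward the target each run, while an SPC check (here a basic Shewhart 3-sigma rule on the adjusted output) watches for assignable causes the controller cannot explain. The process model, gain, and limits are illustrative assumptions, not the paper's contact-etch application.

```python
# Sketch: EPC feedback adjustment combined with SPC monitoring of the adjusted output.
import numpy as np

rng = np.random.default_rng(0)
target = 100.0          # desired quality characteristic (e.g., a critical dimension)
lam = 0.4               # EPC gain: fraction of the observed deviation compensated each run
sigma = 1.0             # assumed common-cause standard deviation

drift, adjustment = 0.0, 0.0
outputs = []
for run in range(200):
    drift += 0.05                                   # slow disturbance the EPC tracks
    shock = 4.0 if run == 150 else 0.0              # an assignable cause for SPC to catch
    y = target + drift + adjustment + shock + rng.normal(0.0, sigma)
    outputs.append(y)
    # EPC step: integral-style feedback, moving the input against the deviation.
    adjustment -= lam * (y - target)
    # SPC step: Shewhart 3-sigma check on the adjusted output.
    if abs(y - target) > 3.0 * sigma:
        print(f"Run {run}: out-of-control signal, y = {y:.2f}; look for an assignable cause")

print("Std. dev. of adjusted output:", round(float(np.std(outputs)), 2))
```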

 

Game Theory Analysis on Market Efficiency under Corporate Strategic Alliance

Dr. Chen-Kuo Lee, Ling Tung University, Taiwan R.O.C.

Wen-Jun Yang, Ling Tung University, Taiwan R.O.C.

 

ABSTRACT

The study of corporate strategic alliances is becoming more and more important to the academic community. However, most researchers have developed their theories from social, behavioral, and managerial perspectives. Few researchers have studied corporate strategic alliance from an economic, especially industrial organization, standpoint. Therefore, this study implements an OEM alliance model based upon game theory to analyze market efficiency under a cooperative production organization, thereby demonstrating the contribution made by corporate strategic alliance to social benefits. This study indicates that cooperative organizations will raise social benefits by sharing resources and reducing costs, provided that such cooperative organizations are not designed to restrict production and prices. In short, corporate strategic alliance is more efficient than a perfectly competitive market as far as resource allocation is concerned.  Facing the tremendous pressure imposed by globalization and technological innovation, more and more corporations in Europe and the United States (particularly in the information technology industry) have adopted corporate strategic alliances to compete in the international market since the 1980s. These developments seem to contradict the traditional economic theories of competition and monopoly. As Taiwan joined the WTO and globalization spread all over the world, corporate strategic alliance drew more and more attention in Taiwan and, meanwhile, an increasing number of Taiwanese corporations attempted to reduce market risks via strategic alliance, thereby upgrading their competitiveness (Harrigan, 1985; Porter and Fuller, 1986; Contractor and Lorange, 1988; Bronder and Pritzl, 1992; Grandori and Soda, 1995; Glaister, 1996; Eisenhardt and Schoonhoven, 1996; Lin and Darling, 1999; Saffu and Mamman, 2000).  Corporate strategic alliance has thus drawn more attention than ever and, at the same time, has been well accepted in recent years in consideration of risk elimination (Devlin and Bleackley, 1988; Saffu and Mamman, 2000), risk-sharing (Contractor and Lorange, 1988; Saffu and Mamman, 2000), functional supplementation (Contractor and Lorange, 1988; Saffu and Mamman, 2000), and readiness for the market (Contractor and Lorange, 1988). Corporate strategic alliance appears to be an optimal solution under the current business environment. As a result, a number of corporate strategic alliances have been established (Lung-yi Huang, Yin-shan Huang, Chun-sung Wu, 2004). According to a recent issue of Harvard Business Review, 200 American corporations founded 1,592 alliances in 1993 – 1997, of which 48% ended in less than 24 months (Dyer, Kale and Singh, 2004).  In past years, corporate cooperation and competition have not been treated as a main topic in mainstream economic theory (Osborn, 1997). Jean Tirole (1988) and Milgrom and Roberts (1992) have made some brief descriptions of this topic. So far, industrial organization theory has offered no definition of corporate strategic alliance. This study has therefore, after reviewing the relevant literature, defined corporate strategic alliance accordingly. 
This study chooses cooperative production - the market structure most similar to oligopoly - as the subject of analysis, comparing cooperative production and oligopoly with respect to the difference in social benefits. This study focuses on two questions:  (I) Which is more beneficial to consumers, OEM or direct competition?  (II) As far as resource allocation is concerned, is corporate strategic alliance more efficient than a fully competitive market?  In this section, an OEM alliance is used as an example to analyze the market performance of the production organization in order to demonstrate corporate strategic alliance's effectiveness for social benefits. This study streamlines the model and uses the difference between corporate strategic alliance and oligopoly in the following assumptions:  (1) Assume there are two markets, A and B; corporation A is the only producer in market A and corporation B is the only producer in market B; the two corporations produce identical products; corporation A's products can be sold in market B, and corporation B's products can be sold in market A.  (2) In market A and market B, corporations A and B are only allowed to determine their sales individually and are unable to control each other's sales. In other words, there is no way to create a cartel. OEM requires no ownership.  The direct competition model assumes that consumers in market A and market B have no preference between the two corporations' products. In other words, prices are determined by demand and supply in each market: supply increases and prices decrease, and vice versa. P(b) and P(b*) denote the inverse demand functions for market A and market B, respectively, and price is a linear function of supply.  Let X and X* denote corporation A's sales in market A and market B, respectively, and Y and Y* denote corporation B's sales in market A and market B, respectively; fixed production cost is zero and marginal production cost is C. Therefore, CX is the total cost for corporation A to produce X units of product. A marketing channel has to be established in order to sell products; assume a marketing channel is established at a fixed cost α. Therefore, CX + α denotes the total cost for corporation A to produce and sell X units of product in market A, and CY* + α denotes the total cost for corporation B to produce and sell Y* units of product in market B. To ship products for sale in outside markets, corporations A and B have to bear a fixed cost α for setting up a marketing channel, together with shipping costs. The marginal production cost therefore rises and is denoted C/β, where 0 < β ≤ 1 (when β = 1, the shipping cost is zero). Therefore, (C/β)X* + α denotes the total cost for corporation A to sell X* units of product in market B, and (C/β)Y + α denotes the total cost for corporation B to sell Y units of product in market A.
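As an illustration of the comparison described above, the sketch below computes a Cournot outcome in market A under the linear demand and cost structure just defined and contrasts it with a simplified OEM arrangement. The demand intercept a, slope b, the parameter values, and the way the OEM case is modeled here (both firms' sales in market A produced locally at marginal cost C, with only one channel cost and quantities still chosen non-cooperatively) are illustrative assumptions, not the paper's exact model.

```python
# Sketch: social benefit in market A under direct competition versus an OEM
# arrangement, assuming inverse demand P = a - b*(q1 + q2).

def cournot_quantities(a, b, c1, c2):
    """Cournot equilibrium outputs for two firms with marginal costs c1, c2."""
    q1 = (a - 2 * c1 + c2) / (3 * b)
    q2 = (a - 2 * c2 + c1) / (3 * b)
    return max(q1, 0.0), max(q2, 0.0)

def market_surplus(a, b, c1, c2, fixed_costs):
    q1, q2 = cournot_quantities(a, b, c1, c2)
    Q = q1 + q2
    p = a - b * Q
    consumer_surplus = 0.5 * b * Q ** 2
    profits = (p - c1) * q1 + (p - c2) * q2 - fixed_costs
    return consumer_surplus + profits, p, Q

a, b, C, alpha, beta = 10.0, 1.0, 2.0, 1.0, 0.8

# Direct competition in market A: corporation A produces at C, corporation B
# ships in at C/beta and each firm pays its own channel cost alpha.
direct = market_surplus(a, b, C, C / beta, fixed_costs=2 * alpha)

# OEM arrangement in market A: corporation A also manufactures corporation B's
# sales locally, so both quantities carry marginal cost C and one channel cost.
oem = market_surplus(a, b, C, C, fixed_costs=alpha)

for label, (welfare, p, Q) in [("direct competition", direct), ("OEM alliance", oem)]:
    print(f"{label}: price = {p:.2f}, quantity = {Q:.2f}, social benefit = {welfare:.2f}")
# Under these assumptions the OEM case yields a lower price, a larger quantity,
# and a higher social benefit, illustrating the cost-sharing argument in the text.
```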

 

Ship Mortgage and Vessel Arrest Laws in Mainland China, Hong Kong and Taiwan: A Comparative Analysis

Dr. Felix W. H. Chan, The University of Hong Kong

 

ABSTRACT

Banks and other financial institutions providing ship finance require legal protection just as much as other entrepreneurs.  When a bank finances the purchase of a ship, the borrower has to execute a ship mortgage in favour of the bank.  By definition, a ship mortgage is a security over the ship which enables the bank, on default by the borrower, to take possession of the ship and sell it to discharge the debt.  In order to enforce the ship mortgage, the bank may ask a maritime court to arrest the ship and, through judicial procedure, sell or auction it.  This paper comparatively explores the legal and practical issues regarding the nature and the enforcement of ship mortgages in Mainland China, Hong Kong and Taiwan. When a bank finances the purchase of a ship, the borrower has to execute a ship mortgage in favour of the bank.  By definition, a ship mortgage is a security over the ship which enables the bank, on default by the borrower, to take possession of the ship and sell it to discharge the debt.  In order to enforce the ship mortgage, the bank may ask a maritime court to arrest the ship and, through judicial procedure, sell or auction it.  Ownership and mortgages of ships may be registered with registries located in jurisdictions such as Panama, Liberia, the Bahamas, Vanuatu and the Marshall Islands.  These registries are often referred to as "flags of convenience".  As these countries depend significantly on income from shipping, they open their registers to non-nationals.  Anyone can register a ship on one of these open registers, and a link between the nationals of the flag state and the ship is not required (1). On the other hand, a ship is a highly mobile vehicle of carriage.  It may not return to its home base or ever visit the same port again.  It is a well-established international shipping practice that ship mortgages can be enforced worldwide, regardless of where the mortgages are registered.  This paper comparatively explores the legal and practical issues regarding the nature and the enforcement of ship mortgages in Mainland China, Hong Kong and Taiwan. The introduction of laws on ship mortgages and registration was prompted by China's need to purchase more vessels and obtain shipping finance through various financing structures, such as equity, debt or charters, that are consistent with international standards.  In fact, most major shipping companies in China expand their fleets by adopting these types of financing structures or arrangements.  The Rules of the People's Republic of China Governing Registration of Ships came into force on 1st January 1995 (the Registration Rules).  Under the Registration Rules, the following ships shall be registered: ships owned by citizens of the PRC whose residence or principal places of business are located within the territory thereof; ships owned by enterprises with legal person status established under the laws of the PRC and whose principal places of business are located within the territory thereof, provided that if foreign investment is involved, the proportion of registered capital contributed by Chinese investors shall not be less than 50%; public service ships of the PRC Government and ships owned by institutions with legal person status; and other ships whose registration is deemed necessary by the competent authority of harbour superintendency of the PRC. 
The Bureau of Harbour Superintendency is the competent authority in charge of the registration of ships, but the Ship Registration Administration (situated at various ports) is the agency in control of actual registration. Before registration can occur, the Harbour Superintendency requires submission of proper documentation (2).  Having examined and verified the application for registration of ownership, the Ship Registration Administration shall issue to the shipowner whose application meets the requirements of the Rules, a Certificate of Registration of Ship’s Ownership and a Certificate of Ship’s Nationality.  Where mortgage is established with respect to a ship of 20 tons gross tonnage or over, the mortgagee and the mortgagor shall apply to register it with the Ship Registration Administration (3) to obtain a Certificate of Registration of Ship Mortgage.  Where two or more mortgages have been established on the same ship, the authority will make the registration in sequence of dates on which the applications were registered, and indicate the respective dates in the register of ships.  Special provisions covering the making of commercial loans also exist.  Thus, a Chinese shipping company wishing to borrow foreign currencies in its own name directly from a foreign entity for the purpose of purchasing vessels, must first seek approval from the State Administration of Foreign Exchange (SAFE).  This is governed by the Administration of Borrowing of International Commercial Loans by Domestic Organisations Procedures promulgated by SAFE in 1997.  Under the laws, all international commercial loans (4) require SAFE approval or risk being declared void.  In addition, all international commercial loans must be registered with SAFE which will then issue a Foreign Debt Registration Certificate, to be presented to authorised banks.  This will enable borrowers to open foreign exchange bank accounts and undertake procedures for remitting foreign exchange abroad.  Bearing in mind that some overseas lenders may be reluctant to take a mortgage over a vessel under the PRC flag, there has been a recent tendency for domestic shipping enterprises wishing to avoid the red tape of the above procedures, to set up subsidiaries in developed maritime countries such as Liberia and Panama.  Under such schemes, loans are advanced to these subsidiaries for the purchase of ships, but once the transactions are completed, both the ownership and mortgages must be registered with the appropriate Liberian or Panamanian authorities.  The offshore subsidiaries may then charter the ships back to their mother companies in China (5).  As part of the transaction, the Liberian or Panamanian shipowners must also assign all their rights and benefits (e.g. assignment of insurance proceeds, charter, sub-charter and freight) in favour of the overseas lender as security.  When a foreign mortgagee enforces his foreign registered mortgage in China, the law of the flag state (not Chinese Law) shall apply to the mortgage of the ship (6).   It is speculated that this strategy will continue to dominate the market of ship financing in the PRC, unless the relevant authorities relax the control of foreign exchange. Although cutting down on the red tape, the SAFE may not be entirely excluded from these proceedings, as the overseas financial institutions involved may require guarantees from the parent companies in China for the loans advanced to their overseas subsidiaries in addition to taking mortgages of the ships.  
Assignments by the Chinese companies (of the bareboat charter, of rights in relation to the various insurances of the ship, and of the sub-charter freight) may also be regarded as guarantees which require SAFE approval.  Under the Rules Governing Guarantees, only qualified financial institutions authorised by the central government can provide guarantees to foreign lenders for foreign debts.  Government departments and unincorporated institutions cannot issue guarantees for foreign debts.  Any subsequent amendment to the principal loan agreement shall be subject to the consent of the guarantor and to the approval of SAFE. 

 

Human Resource Management and Knowledge Management: A Road Map Toward Improving Organizational Performance

Dr. Fida Afiouni, American University of Beirut, Beirut

 

ABSTRACT

The revolution of information technology is currently breaking organizational hierarchy, boosting communication, and creating a new art of production. Globalization is leading to increased competition, and customer satisfaction is the key word to ensure survival and competitiveness. In this context, knowledge management (KM) has become a must to ensure organizational effectiveness. The knowledge management literature has currently reached the point of acknowledging the importance of people management themes, but has not made the next step of investigating and theorizing these issues in detail. These two fields of human resource management (HRM) and knowledge management are still somehow disconnected. This paper argues that combining human resource management initiatives with those of knowledge management will help improve organizational performance. Drawing on the resource-based view (RBV) of the firm, this paper combines the advances from three different areas of research – intellectual capital, knowledge management, and human resource management – in order to uncover a more holistic perspective on organizational performance. This paper argues that not enough attention has been paid to human capital and its role in the competitive advantage of business in today’s knowledge economy. While much of the early knowledge management literature was heavily focused on technological issues, this has changed, such that the importance of human and social factors has been increasingly recognized. Paradoxically, however, while the importance of these issues has been widely articulated, people management perspectives have yet to be fully developed, and the KM literature has made only partial and limited use of human resource management concepts and frameworks. Drawing on the resource-based view of the firm, the aim of this paper is to draw a road map toward improving organizational performance by combining HRM and KM initiatives. The paper examines the literature of three perspectives from the strategic management literature – knowledge management, intellectual capital, and human resource management – in an attempt to integrate those three fields to improve organizational performance. We will first expose the resource-based view of the firm and discuss it from a human resource management perspective. We will then articulate the knowledge management literature with that of human resource management and intellectual capital and discuss how the combination of those three different fields can improve organizational performance. We conclude that knowledge management initiatives converge with the management of people toward developing intellectual capital and boosting a firm’s performance. The resource-based view of the firm (Wernerfelt, 1984; Barney, 1991; Amit & Schoemaker, 1993; Peteraf, 1993) examines the manner in which organizational resources are applied and combined, the causes that determine the attainment of a sustainable, competitive advantage, and the nature of rents generated by organizational resources. On the basis of this theory, the firm is viewed as the accumulation of unique resources of a diverse nature (Wernerfelt, 1984). Amit and Schoemaker (1993) define resources as stocks of available factors that are owned or controlled by the firm. These resources consist of know-how that can be traded (e.g., patents and licenses), financial or physical assets (e.g., property, plant, and equipment), human capital, and so on (Grant, 1991). 
On the other hand, Barney (1991) defines firm resources as all assets, capabilities, organizational processes, firm attributes, information, knowledge, and so on controlled by a firm that enable the firm to conceive of and implement strategies that improve its efficiency and effectiveness.  These numerous possible firm resources can be conveniently classified into three categories: physical capital resources (Williamson, 1975), human capital resources (Becker, 1993), and organizational capital resources (Tomer, 1987). Physical capital resources include the physical technology used in a firm, a firm’s plant and equipment, its geographic location, and its access to raw materials. Human capital resources include the training, experience, judgment, intelligence, relationships, and insight of individual managers and workers in a firm. Organizational capital resources include a firm’s formal reporting structure, its formal and informal planning, controlling, and coordinating systems, as well as informal relations among groups within a firm and between a firm and those in its environment. Organizational capabilities characterize the dynamic, nonfinite mechanisms that enable the firm to acquire, develop, and deploy its resources to achieve superior performance relative to other firms (Dierickx & Cool, 1989). Capabilities are dependent upon the firm's capacity to generate, exchange, and utilize the information needed to achieve desired organizational outcomes through the firm's human resources (Amit & Schoemaker, 1993).  In order for organizational resources to become a source of sustainable competitive advantage, Barney (1991) argues that these resources must be rare, valuable, without substitutes, and difficult to imitate. These resources can be viewed as bundles of tangible and intangible assets, including a firm's management skills, its organizational processes and routines, and the information and knowledge it controls. Among the firm’s resources, intangible resources are more likely to produce a competitive advantage because they are often rare and socially complex, thereby making them difficult to imitate (Itami, 1987; Barney, 1991; Peteraf, 1993; Black & Boal, 1994). Furthermore, intangible resources are difficult to change except over the long term (Teece, Pisano, & Shuen, 1997). Most particularly, human capital has long been argued as a critical resource in most firms (Pfeffer, 1994). Recent research suggests that human capital attributes (including education, experience, and skills) and, in particular, the characteristics of top managers affect firm outcomes (Huselid, 1995; Wright, Smart, & McMahan, 1995; Finkelstein & Hambrick, 1996).  Within the field of human resource management, the RBV has made important contributions in the rapidly growing area of strategic human resource management (Wright, Dunford, & Snell, 2001). In resource-based thinking, HRM can be valued not only for its role in implementing a given competitive scenario, but for its role in generating strategic capability (Barney, 1991), for its potential to create firms which are more intelligent and flexible than their competitors over the long haul, firms which exhibit superior levels of co-ordination and co-operation (Grant, 1991). 
The resource-based view suggests that human resource systems can contribute to sustained competitive advantage through facilitating the development of competencies that are firm specific, produce complex social relationships, are embedded in a firm's history and culture, and generate tacit organizational knowledge (Barney, 1991; Wright & McMahan, 1992).

 

Using the E-CRM Information System in the Hi-Tech Industry: Predicting Salesperson Intentions

Feng-Cheng Tung, Diwan College of Management, Tainan, Taiwan

 

ABSTRACT

With the rapid development of Information Technology and the increase in different Internet user populations, electronic customer relationship management (e-CRM) plays an ever-more important role in the development and improvement of competitiveness within enterprises. This research combines innovation diffusion theory and the technology acceptance model, and also adds two new research constructs, namely, trust and perceived information quality. Thus we set forth a new hybrid technology acceptance model, with the Hi-Tech industry in Taiwan as the focus of research, in order to study salespersons’ intentions to use the e-CRM information system. Based on 285 questionnaires collected from 45 electronic corporations in Taiwan, the research finds strong support for this new hybrid technology acceptance model in predicting salespersons’ intentions to use the e-CRM information system. With the rapid development of the Internet, e-Commerce is now being widely accepted among enterprises and the changing trend is very clear. An enterprise’s profit comes from its customers, and customers are the foundation of an enterprise’s basic operations. The management of customer relations is a new marketing tool for the new era. Further, electronic customer relations management (e-CRM) allows enterprises to understand customer behavior more specifically than before and to anticipate customer needs through online tracking and analysis. Bayon et al. (2002) point out that electronic customer relations management (e-CRM) mainly relies on Internet- or web-based interaction between firms and their customers. It utilizes the web to initiate, negotiate, and finally execute business transactions online. For an enterprise, effectively applying information science and technology to support the e-CRM information system is an imminent and important task to promote that enterprise’s competitiveness and profitability. As the e-CRM information system is growing in importance in enterprise development and business competitiveness, we tried in this research to combine innovation diffusion theory (IDT) and the technology acceptance model (TAM), using the electronic industry of Taiwan as the research focus, to study salesperson intentions after putting forward a new hybrid technology acceptance model of the e-CRM information system.  IDT has been widely used for relevant information technology (IT) and information systems (IS) research (Karahanna & Straub, 1999). TAM (Davis, 1989; Davis, Bagozzi, & Warshaw, 1989) has received significant attention in the IT/IS acceptance literature. According to TAM, system usage behavior is determined by the intention to use a particular system, which in turn is determined by that system’s perceived usefulness (PU) and perceived ease of use (PEOU). However, the e-CRM information system is relatively new at present, and the users of the e-CRM information system are a specific user group. Thus, the existing parameters of TAM are not complete enough to fully reflect e-CRM information system users’ motives. This research proposes two new constructs, “Trust” and “Perceived information quality”, to enhance the understanding of e-CRM information system users.  Finally, this research combines IDT and TAM and also uses trust and perceived information quality as two research dimensions. It puts forward a new hybrid technology acceptance model with which to study salesperson intentions toward the e-CRM information system in Taiwan’s electronics industry. 
By explaining salespersons’ intentions from a user’s perspective, the findings of this research not only help to develop a more user-acceptable e-CRM information system, but also provide insight into the best way to promote new IT systems to potential users.  The concept and practice of the e-CRM information system provide the ability to capture, integrate, and distribute data gained on organization Web sites throughout the enterprise (Pan & Lee, 2003). The e-CRM information system has evolved further recently with the emergence of information technology, such as the Internet and web technologies. The system integrates and simplifies all customer-related processes through the Internet (Plakoyiannaki & Tzokas, 2002; Gordon, 2002). The design of the e-CRM information system, therefore, must consider all phases of the customer buying process, from pre-purchase and purchase to post-purchase and after-sales service, as well as the types of interactions required at each stage of the process (Rust & Lemon, 2001). Innovation diffusion theory (IDT) has been widely used for relevant IT and IS research (Karahanna & Straub, 1999). Innovation diffusion theory (IDT) is defined as "the process by which an innovation is communicated through certain channels over time among the members of a social system". It has been widely applied in disciplines such as anthropology, sociology, education, communication, marketing, etc. (Rogers, 1983, 1995). IDT includes five significant innovation characteristics: relative advantage, compatibility, complexity, trialability and observability (Rogers, 1995). Relative advantage means that innovations can bring greater advantage than traditional methods. Compatibility means consistency between innovations and existing values, past experience, and the requirements of purchasing managers and procurement. Complexity represents the level of difficulty in understanding innovations and their ease of use. Trialability refers to the degree to which innovations can be tested. Observability refers to the degree to which the results of innovations can be observed by people. All these characteristics are used to explain the user adoption and decision-making process. However, previous studies have found that only relative advantage, compatibility, and complexity are consistently related to the adoption of innovation (Agarwal & Prasad, 1998). TAM has received considerable attention from researchers in the information system (IS) field over the past decade. TAM (Davis, 1989; Davis et al., 1989) originally suggested that two beliefs, perceived usefulness and perceived ease of use, are instrumental in explaining the variance in users’ intentions. Perceived usefulness is the degree to which a person believes that using a particular system enhances his or her job performance. Perceived ease of use is the degree to which a person believes that using a particular system will be free of effort.
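For readers who wish to see how the relationships posited by such a hybrid model can be estimated once the questionnaire data are coded, the following minimal sketch (in Python) regresses the intention to use the e-CRM system on the five antecedent constructs named above. It is only an illustration under assumed data: the file name and column names are hypothetical, the constructs would normally be measured as multi-item scales, and a published study of this kind would typically rely on structural equation modelling rather than a single ordinary least squares equation.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical survey extract: one row per salesperson, construct scores on a 1-7 scale.
df = pd.read_csv("ecrm_survey.csv")  # assumed file containing the columns used below

# Ordinary least squares regression of usage intention on the five antecedents
# (perceived usefulness, perceived ease of use, compatibility, trust,
# perceived information quality).
model = smf.ols(
    "intention ~ usefulness + ease_of_use + compatibility + trust + info_quality",
    data=df,
).fit()
print(model.summary())  # coefficients indicate each construct's estimated effect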

 

Valuing Pilot Projects in a Learning by the Finite Difference Method

Dr. Cherng-Shiang Chang, China University of Technology, Taipei, Taiwan

 

ABSTRACT

By using the real option approach, Errais and Sadowsky (2005) value the pilot phase of a project requiring N stages of investment for completion as a compound perpetual Bermudan option.  Further, both market and technical uncertainty are incorporated into the dynamics of revenues and costs of the pilot project in the model of Errais and Sadowsky.  By applying an approximate dynamic programming algorithm, Errais and Sadowsky value the option to invest as well as the optimal exercise policy.  They implement the algorithm for a simplified version of the model in which the revenues are assumed to be constant.  However, this approach may suffer from some difficulties: (i) the dynamics of the state variables may not follow geometric Brownian motion, and (ii) the joint probability density function must be calculated in the case of multiple state variables.  In this article, we solve this problem by employing an alternative approach: the finite difference method.  The partial differential equations (PDEs) for the dynamics of the value function are derived and the solution algorithm is presented in detail.  To validate the solution algorithm, we solve the simplified version of the model as proposed by Errais and Sadowsky for illustration.  The results are in good agreement with those obtained by Errais and Sadowsky using the approximate dynamic programming approach.  An R&D investment opportunity depends upon the resolution of several sources of uncertainty.  Technical uncertainty may be the key component concerning the outcome of each firm’s R&D effort in the early stages.  Later on, market demand uncertainty may be dominant once the product is launched in the market.  The investment decisions of firms would be significantly influenced by both of these uncertainties (Smit and Trigeorgis, 2004).  To study the issues of optimal R&D investment, the real option approach has become the mainstream methodology in recent years.  As acknowledged by Schwartz (2004), patents and R&D projects can be regarded as a complex option on the variables underlying the value of the project. Majd and Pindyck (1987) use contingent claims analysis to derive optimal decision rules and to value such investments.  They also determine the effects of time to build and opportunity cost on the investment decision, but there is no learning involved.  Pindyck (1993) is probably the first to treat technical uncertainty exogenously, with the project advancing randomly through its stages.  However, he does not differentiate between the development phase and the commercial phase of a project.  Brach and Paxson (2001) model investment in the drug development process using a Poisson real option analysis.  Schwartz and Zozaya (2003) employ a two-factor diffusion model to analyze investment in the IT industry in both acquisition and development projects. Errais and Sadowsky (2005) value pilot project investments as a compound perpetual Bermudan option.  In their model, both market and technical uncertainty are incorporated and the stages of the commercial phase and the pilot phase of a project are differentiated.  Errais and Sadowsky (2005) solve the problem by employing an approximate dynamic programming algorithm, which relies on the independence of the state variable increments.  
However, this approach may suffer from two difficulties: (i) the dynamics of the state variables may not follow geometric Brownian motion, and (ii) the joint probability density function for the state variables must be calculated in the case of multiple state variables. In this article, we solve this problem by introducing an alternative approach: the finite difference method.  The finite difference method is easily implemented and straightforward to extend when more uncertainties and state variables are to be incorporated.  This unified approach is also applicable to state variables which do not follow geometric Brownian motion.  The partial differential equations (PDEs) for the dynamics of the value function are derived and the solution algorithm is presented in the paper.  Furthermore, we solve the simplified version of the model for illustration and compare our results with those obtained by Errais and Sadowsky (2005) to validate the solution algorithm. Here we mainly consider the model proposed by Errais and Sadowsky (2005) and follow its original notation.  Assume that a pilot phase with N stages of investment has to be completed before the commercial phase of the project can be launched.  It takes ΔT units of time to complete each pilot stage.  Investment decisions are made at times t ∈ L = {T0, T1, …, Tk, …}, where Tk > Tk-1 and ΔT = Tk − Tk-1 for any k ≥ 1.  The investment opportunity described becomes a perpetual N-stage Bermudan option.  A perpetual Bermudan option is an option whose early exercise is restricted to certain dates in the future and which has no expiration date. Let Rt be the revenues of the commercial phase based on the information available at time t.  The revenue process is assumed to be completely driven by market uncertainty, with Rt following the geometric Brownian motion dRt = αR Rt dt + σR Rt dWt, where αR and σR represent the growth rate and volatility of Rt, respectively, and Wt is a standard Brownian motion.  The growth rate aK(I, J) and technical volatility g(I, J) are specific to the investment stages and the technical characteristics of the project.  J is the number of stages remaining for completion of the pilot phase and I ∈ {0, Ī} (zero, or the amount Ī the firm would like to invest in each stage).  The Brownian term zt corresponds to technical uncertainty, which is private to the firm and independent of the market Brownian motions.  By employing the approximate dynamic programming algorithm, Errais and Sadowsky (2005) obtain a closed-form solution for the simplified model in which the revenues are assumed constant.  To solve the problem alternatively, we present a unified approach, the finite difference method, as follows.  We first derive the corresponding partial differential equations (PDEs) for Equations (4)-(5) and (13)-(14), then present the numerical algorithm to solve the PDEs in the next subsection.  Equations (15) and (18) are time-dependent two-dimensional partial differential equations.  To illustrate the finite difference method, a portion of a two-dimensional grid is shown in Figure 1.  Suppose that the life of the option is T.  We divide this into l equally spaced intervals of length δt = T / l, so that a total of l + 1 times, tk = k·δt for k = 0, 1, …, l, are considered.
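To make the grid construction concrete, the sketch below (in Python) steps an explicit finite-difference scheme backward in time for a one-factor value function V(R, t) under geometric Brownian revenue dynamics, applying a Bermudan-style exercise comparison only at discrete decision dates. It is a minimal illustration under assumed parameters, not the two-dimensional scheme with technical uncertainty developed in the paper: the payoff, drift, discount rate and decision-date spacing are all hypothetical.

import numpy as np

# Hypothetical parameters (illustrative only, not the paper's calibration).
mu, sigma, r = 0.03, 0.25, 0.05     # revenue drift, volatility, discount rate
T, l = 2.0, 2000                    # horizon and number of time steps (l + 1 time points)
dt = T / l
R_max, m = 400.0, 100               # truncation of the revenue axis and number of space steps
dR = R_max / m
R = np.linspace(0.0, R_max, m + 1)  # revenue grid R_0, ..., R_m

K = 100.0                           # hypothetical investment cost
payoff = np.maximum(R - K, 0.0)     # hypothetical exercise payoff
exercise_every = 250                # a decision date every 250 time steps (Bermudan feature)

V = payoff.copy()                   # terminal condition V(R, T)
for n in range(l, 0, -1):
    V_new = V.copy()
    # Explicit finite differences for V_t + mu*R*V_R + 0.5*sigma^2*R^2*V_RR - r*V = 0.
    dV_dR = (V[2:] - V[:-2]) / (2.0 * dR)
    d2V_dR2 = (V[2:] - 2.0 * V[1:-1] + V[:-2]) / dR**2
    V_new[1:-1] = V[1:-1] + dt * (mu * R[1:-1] * dV_dR
                                  + 0.5 * sigma**2 * R[1:-1]**2 * d2V_dR2
                                  - r * V[1:-1])
    V_new[0] = V[0] * (1.0 - r * dt)          # boundary at R = 0 (pure discounting)
    V_new[-1] = 2.0 * V_new[-2] - V_new[-3]   # linear extrapolation at R = R_max
    if (n - 1) % exercise_every == 0:         # exercise check only at decision dates
        V_new = np.maximum(V_new, payoff)
    V = V_new                                 # value one time step earlier

print("Value at R = 100:", np.interp(100.0, R, V))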

 

Investigation of the Returns of Contrarian and Momentum Strategies in the Taiwanese Equity Market

Yi-Wen Chen, Hsing Wu College, Taiwan

 

ABSTRACT

This study examines the application of momentum and contrarian trading and portfolio strategies by investors on the Taiwan Stock Exchange (TSE), to determine if a combined momentum and contrarian strategy can produce better returns than either a pure momentum or a pure contrarian trading strategy. The momentum trading strategy is based on the assumption that markets are relatively rational, and that earnings and price momentum for a stock tend to persist over time until an event, such as a change in earnings, alters the momentum. The contrarian trading strategy involves the creation of a portfolio of low-momentum stocks based on the assumption that momentum and price will (inevitably) increase over time. There have been a large number of previous investigations of issues involved with the development of momentum and contrarian trading strategies in other stock markets. However, there have not been extensive investigations of the TSE, raising the possibility that the TSE has characteristics that result in different levels of effectiveness for these trading strategies. The present study developed two hypotheses based on some of the models used by previous researchers, and tested these hypotheses using data drawn from TSE reports between January 1990 and December 2005. The findings of the study indicate that the momentum strategy was not prevalent on the TSE. Momentum strategies existed only for the longest holding period (K = 60), regardless of the ranking period. As a result, the hybrid strategy is most effective for portfolio holding periods that are 12 months in duration or less, regardless of the ranking period that is used in portfolio formulation. In the short run, the inherent volatility of the TSE often produces price movements that are opposite to those predicted by either the momentum or the contrarian approach. The hybrid approach to strategy tends to provide a portfolio with a greater degree of flexibility to respond to unexpected movements in the markets. In the long run, the inherent trend of the market towards momentum equilibrium tends to smooth out the returns from the momentum or contrarian aspects of the hybrid portfolio. As a result, the pure momentum or pure contrarian approaches to strategy provide a better return in these longer holding periods than the hybrid strategy. A momentum trading strategy is based on an assessment of the earnings momentum of the firms that comprise the portfolio. It assumes that the market will respond to the general direction of earnings by either bidding up the price of the stock or seeking to sell the stock at the best available price. It also assumes that the historic earnings trend will continue in the future. This trading strategy calls for the purchase of stocks supported by strong earnings momentum and the sale of stocks with weak earnings momentum, with an anomalous event that results in a change in momentum operating as a signal to buy or sell the stock. In practice, it is a model that involves taking a long position on past strong performers and a short position on past weak performers (Jegadeesh and Titman, 1993; Chan, Jegadeesh, and Lakonishok, 1996; Rouwenhorst, 1998; Chan, Hameed and Tong, 2000; Grinblatt and Keloharju, 2000; Grundy and Martin, 2001; Jegadeesh and Titman, 2001). The momentum strategy has been effective over time in the majority of the major equity markets around the globe (Hurn & Pavlov, 2003). 
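To illustrate the mechanics behind such ranking-and-holding strategies, the sketch below (in Python, using pandas) forms equal-weighted winner and loser portfolios from a panel of monthly returns with a J-month ranking period and a K-month holding period, and computes the long-short payoff for both the momentum variant (long past winners, short past losers) and the contrarian variant (long past losers, short past winners). It is a simplified illustration under assumed data: the decile cut-offs, the use of non-overlapping holding windows and the randomly generated panel standing in for TSE returns are choices of this sketch, not features of the study.

import pandas as pd
import numpy as np

def ranked_portfolio_returns(returns, J, K, contrarian=False):
    """Form J-month ranking / K-month holding long-short portfolios.

    `returns` is a (months x stocks) DataFrame of simple monthly returns.
    Winners/losers are the top/bottom decile of cumulative return over the
    previous J months; the position is held for the next K months without
    overlap. Returns a Series of monthly long-short portfolio returns.
    """
    # Cumulative return over the ranking window, known at the formation month.
    ranking = (1.0 + returns).rolling(J).apply(np.prod, raw=True) - 1.0

    out = {}
    months = returns.index
    for i in range(J, len(months) - K, K):          # non-overlapping holding windows
        scores = ranking.loc[months[i]].dropna()
        if scores.empty:
            continue
        winners = scores[scores >= scores.quantile(0.9)].index
        losers = scores[scores <= scores.quantile(0.1)].index
        long_leg, short_leg = (losers, winners) if contrarian else (winners, losers)
        hold = returns.iloc[i + 1:i + 1 + K]        # the K holding months after formation
        spread = hold[long_leg].mean(axis=1) - hold[short_leg].mean(axis=1)
        out.update(spread.to_dict())
    return pd.Series(out).sort_index()

# Hypothetical usage with a random panel standing in for TSE monthly returns.
rng = np.random.default_rng(0)
idx = pd.period_range("1990-01", "2005-12", freq="M")
panel = pd.DataFrame(rng.normal(0.01, 0.08, (len(idx), 50)), index=idx,
                     columns=[f"stock_{j}" for j in range(50)])
momentum = ranked_portfolio_returns(panel, J=6, K=6)
contrarian = ranked_portfolio_returns(panel, J=6, K=6, contrarian=True)
print(momentum.mean(), contrarian.mean())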
The contrarian approach as a trading strategy focuses on the behavior of investors and is based on the assumption that favorable or negative sentiment towards a stock, a sector, or the market will reverse over time. As a result, the strategy fundamentally involves the purchase of securities that are out of favor and the sale of securities that are in favor, in the expectation that sentiment towards the issues will change. The strategy is based on the theory that long-term undervalued anomalies can be identified, where past long-term losers tend to outperform past long-term winners starting at a certain critical point (DeBondt & Thaler, 1985, 1987; Chopra, Lakonishok, & Ritter, 1992; Fama & French, 1996; Richards, 1997). To some degree, the contrarian trading strategy is an arbitrage approach in which the trader seeks to identify undervalued and overvalued stocks. Most investors, however, do not have the relatively long time horizon that is necessary to effectively execute a contrarian strategy (De Long et al., 1990).  The Taiwan Stock Exchange (TSE) is generally considered to be one of the more volatile markets in East Asia, despite its high level of capitalization, strong regulatory structure, and relative maturity as a vehicle for equity capitalization (Titman & Wei, 1999). Nonetheless, there is a relatively high degree of correlation between the earnings of the firms listed on the exchange and the movement of the stocks in response to earnings over the long run. In the short term, however, there have been periods in which the correlation between earnings and price weakens, resulting in excessive upward and downward price movements. This suggests that there is some degree of variability in the types of trading strategies that are used in the Taiwanese markets, with trading strategies potentially changing in response to exogenous events. Many prior researchers have studied the contrarian and momentum strategies one at a time to determine their respective effectiveness in explaining or creating abnormal or above-average returns from investments. From the perspective of portfolio management, an abnormal or above-average return is one that is consistently higher than the market average and is not due to temporary random variation. It can be argued, however, that the contrarian and momentum effects can occur concurrently, or that they result from the same underlying phenomenon. Balvers and Wu (2002, 2006) found that contrarian and momentum effects can simultaneously occur with the same set of assets, and that it is important to consider the interaction between the two types of phenomena. Kim (2003) attempts to resolve this discrepancy by adopting a "trend-bucking" strategy, which combines the contrarian and momentum strategies to yield abnormal returns that are greater than the sum of the returns from the use of separate contrarian and momentum strategies. This research appears to propose combining the contrarian and momentum approaches to form a hybrid strategy, with the new model yielding a higher return than either a pure contrarian or a pure momentum strategy, presumably because both of these effects can occur concurrently or arise from the same phenomenon. This study will examine the use of a mixed contrarian-momentum strategy with a case-example portfolio based on historical TSE data. The aim of the research is to examine trading strategies that use the contrarian, momentum and hybrid contrarian-momentum approaches adapted to the conditions found on the TSE. 
The objective of the study is to determine if a combination of the contrarian and momentum strategies can produce a higher return on the TSE than the use of either a pure contrarian or pure momentum strategy. This study also adds trading volume and institutional investment to examine trading strategies in the TSE.  The study will answer the following research questions: 1.

 

The Corporate Growth of the Firm: A Resource-Based Approach

Francisco Javier Forcadell, Universidad Rey Juan Carlos, Madrid, Spain

 

ABSTRACT

In this paper I analyze corporate growth from a resource-based view, based on a review of the literature on different aspects of growth. As a result of the review, I propose an integrated framework, stemming from the idea that all strategic alternatives available to the firm require and generate resources, in such a way that strategy influences firm resources and firm resources influence the strategy developed. According to this idea, this paper aims to integrate and systemize the different corporate strategy decisions addressed by the resource-based literature, with special emphasis on diversification strategy. In this paper I carry out a review of the literature (mainly from resource-based literature) on corporate growth based on a framework that endeavors to integrate the different corporate strategic decisions. Growth is a dynamic resource-based process with its origin and effect in the resources possessed by the firm. The firm uses its resources to implement strategies and the results of its strategies determine the extent of its resources (quantity, nature and strategic value), which subsequently provide the basis for future strategies. An integrated perspective of strategic corporate decisions may help to highlight the coherence of the studies carried out on different aspects of corporate strategy from a partial perspective, given that all decisions relating to growth must be made after taking impact and implications on the creation of value into account. Thus, for example, the most studied topic in corporate strategy is the relationship between diversification and performance, for which there is no clear cut answer (Palich, Cardinal & Miller, 2000). This could be due, among other things, to the existence of different factors and decisions that influence the relationship and that must be taken into consideration. On the other hand, an analysis of a firm’s boundaries is an area in which the resource-based view (RBV) has scarcely been focused, having been studied mainly in transaction cost economics (Poppo & Zenger, 1998). The aim of the RBV is to provide an answer to the key question of why firms are different and how firms achieve a sustainable competitive advantage (Hoskisson, Hitt, Wan & Yiu, 1999, 437), firms being understood as a combination of resources (Wernerfelt, 1984). There is an area of research that uses the RBV to define development and growth of diversified firms (Mahoney & Pandian, 1992, 367). An explanation of growth from this perspective begins with the structure of resources controlled or possessed by a firm (Kochhar & Hitt, 1998). Optimum corporate growth (1) assumes the existence of a balance between the exploitation of existing resources and the development of new ones (Wernerfelt, 1984). This requires knowledge of current resources, those that will be needed in the future and the strategies used to develop them (2). From a dynamic perspective, strategy should make effective use of previously generated resources and should generate sufficient resources to make future strategies viable. This implies a long term dynamic interaction between strategy and resources (3). The strategic dynamic fit defined by Itami & Roehl (1987: 1) assumes a long term fit of external factors, resources and strategy, which must constantly be aligned and re-aligned (Amit & Schoemaker, 1993). Penrose (1959: 73) refers to the necessary specialization of a firm’s resources in order to sustain growth as a ‘virtual circle’. 
This specialization requires growth and diversification in order to make full use of idle resources, in such a way that specialization induces diversification. In this way, a firm will try to develop its portfolio of businesses so that they fit its strategic resources and, at the same time, build its resources to fit its portfolio of businesses. Proposition 1. There is a dynamic and recursive relationship between resources and strategy over time. The firm’s current portfolio of available resources therefore determines its future strategy and current strategy determines the resources available in the future. According to the review of literature, I suggest theoretical framework of synthesis (Figure 1), in an attempt to relate growth strategies to the resources a firm possesses and can develop. The framework begins with the resource-based conception of firm as a combination of resources with different strategic values. Growth strategy, with all the decisions it involves, is directed by (surplus) available resources and lack of resources, and determines the future endowment of resources. The growth cycle begins when a new firm develops an innovation (Hoopes, Madsen & Walker, 2003, 893) (4). In the RBV, there is an implicit assumption that an initial supply of resources is available. This means that the entrepreneurship activity will provide a set of initial resources upon which the new firm’s future growth will be based (Zahra, Kuratko & Jennings, 1999), either as a valid business idea or financial, human or technological resources. Recent research suggests an integration of entrepreneurship and strategic management, which Ireland, Hitt & Sirmon (2003, 966) call strategic entrepreneurship. An effective integration of entrepreneurial activity and strategic management may generate synergies and may contribute to corporate growth and success (Ireland, Hitt, Camp & Sexton, 2001), enabling the creation and renewal of dynamic capabilities (Barney, Wright & Ketchen, 2001; Zahra et al., 1999), achieving a sustainable competitive advantage (Hitt & Ireland, 2000) and creating value (Alvarez & Busenitz, 2001). The RBV may help to develop and increase research on entrepreneurship (Alvarez & Busenitz, 2001) and to encourage the integration between entrepreneurship and strategic management (Davidsson, Low & Wright, 2001). Entrepreneurship does not require, although may include, the creation of new organizations. It may therefore take place within an existing organization (Amit, Glosten & Mueller, 1993) as corporate entrepreneurship (5), an important way of building and re-shaping firm resources (Zahra et al., 1999) that affects organizational survival, growth and performance (Dess et al., 2003). Within an existing organization, entrepreneurship covers three types of phenomena (Sharma & Chrisman, 1999), (1) innovation; (2) strategic change; and (3) both internal and external corporate venturing. On the other hand, for Sharma & Chrisman independent entrepreneurship is the process involving the creation of a new organization.

 

Antecedents of Learner Satisfaction toward E-learning

Dr. Yao-kuei Lee, Tajen University, Taiwan

Shih-pang Tseng, Tajen University, Taiwan

Dr. Feng-jung Liu, Tajen University, Taiwan

Dr. Shu-chen Liu, Mingdao University, Taiwan

 

Abstract

E-learning represents a paradigm shift in learning enabled by new information technologies. Since it heralds a radical change in educational method, learner satisfaction must be re-examined so the benefits of using new technologies can be maximized. Based on prior theoretical and empirical research, this study proposed a research model to explain student satisfaction from using e-learning (distance education) as a stand-alone educational method. Sample data were collected online from 3713 students enrolled in a southern Taiwan university’s continuing education division’s distance education courses. The proposed model was supported by the empirical data, and the findings revealed that the factors influencing learner satisfaction toward e-learning were, from greatest to least effect, organization and clarity of digital content, breadth of the digital content’s coverage, learner control, instructor rapport, enthusiasm, perceived learning value, and group interaction. Implications for learning quality and the teacher’s role in the e-learning context are discussed. With the ever-increasing popularity of the computer and the Internet, e-learning has become an important educational tool and method for the global society. More foreign students obtain their degrees through online courses, and online education, as a business sector, has shown one of the largest growth cycles on the Internet (Huynh, Umesh, and Valacich, 2003; Symonds, 2003). In Taiwan, well-known educational web sites such as EDUCITY (www.educities.edu.tw) and the e-learning national project (elnp.ncu.edu.tw) indicate that e-learning has certainly gained public attention. E-learning can be used as a supplementary learning tool for face-to-face instruction or as a stand-alone distance education method. When used as a supplementary learning tool, its purpose is to improve students' learning efficiency and effectiveness under the conventional teaching paradigm; but when used as a stand-alone distance education method, its purpose is to offer an alternative educational outlet which goes beyond simply promoting learning efficiency and effectiveness. In line with the new trend, the university under study began using e-learning as a supplementary tool in 2000 and, in 2003, the institution advanced its use of e-learning into distance education in the continuing education division. The decision was in accordance with Lee’s (2007) study, which suggested that using this technology as a distance education method for non-traditional students would be more compatible with their living schedules and needs, and would most likely be accepted; and that non-traditional students had greater intention to use e-learning for distance education. Greater intention to use would lead to greater actual usage (Ajzen and Fishbein, 1980; Fishbein and Ajzen, 1975). In evaluating the outcomes of distance education, researchers have compared cognitive factors such as amount of learning, academic achievement, accomplishments, and test and homework scores between traditional teaching and distance teaching (Spooner, Jordan, Gozzine, and Spooner, 1999). The current study attempts to investigate cognitive and affective factors, such as learner satisfaction, in the e-learning context and to explore the antecedents of the outcome. The study’s purpose is not only to provide feedback to educational administrators or faculty members regarding e-learning implementation but also to identify the relevant factors with which they can make enhancements to this important tool. 
Further, in the virtual learning environment, research like this is needed in order to understand the levels of learning quality and amount of learning that can be assumed by learners and instructors (Lin and Hsieh, 2001). The e-learning system employed in this study is an integrated information system, rather than a single-function, stand-alone system. As such, it provides: (1) digital course content in text, image, audio/video formats; (2) functionalities for interaction between instructors and students and among students (forums, email, chat, etc.); (3) administration of quizzes and homework; (4) grade management; (5) web browser as an interface and Internet-based technological platform; and (6) individual records of system use. The system was designed specifically for educational purposes, and the teaching, learning, and communication between instructors and students can be conducted synchronously or asynchronously. Since an e-learning system is an integration of computer, communication, and digital content adopted for teaching and learning use, a review of related literature for evaluating learner satisfaction should include research related to information system satisfaction, assessment of educational quality, and their relationships. The Theory of Reasoned Action (Ajzen and Fishbein, 1980; Fishbein and Ajzen, 1975) and the Technology Acceptance Model (Davis, Bagozzi, and Warshaw, 1989) have been widely applied to IT-related research. The Theory of Reasoned Action was used for predicting and explaining human behaviors in general and explained their causal relationships as belief→attitude→intention→behavior. The Technology Acceptance Model modified the Theory of Reasoned Action to study the adoption behavior of information systems in particular and suggested that external variables directly influenced the belief constructs. Based upon these two fundamental theories, this study proposed an outline model such that external variables→belief→attitude. Thus, three sets of research variables—external variables, beliefs, and attitudes—were examined for this research. External variables are special information or messages that can guide learners’ attitude development toward e-learning technologies. Mathieson (1991) suggested that external variables should guide the development of information systems.
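As a rough illustration of how the ordering of effects reported in the abstract can be obtained once such external and belief variables are measured, the sketch below (in Python) regresses a learner-satisfaction score on the seven factors listed earlier, with standardized predictors so the coefficient magnitudes are comparable. The file name and column names are hypothetical, and the published study's actual analysis of the belief-attitude chain may well use a different estimation technique; this is a sketch only.

import pandas as pd
import statsmodels.api as sm

# Hypothetical survey extract: one row per student, seven factor scores plus satisfaction.
df = pd.read_csv("elearning_survey.csv")          # assumed file name
factors = ["organization_clarity", "content_breadth", "learner_control",
           "instructor_rapport", "enthusiasm", "perceived_learning_value",
           "group_interaction"]

X = (df[factors] - df[factors].mean()) / df[factors].std()   # standardize predictors
X = sm.add_constant(X)
model = sm.OLS(df["satisfaction"], X).fit()

# Coefficients sorted by magnitude approximate a "greatest to least effect" ordering.
print(model.params.drop("const").abs().sort_values(ascending=False))
print(model.summary())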

 

Corporate Governance around the World: An Investigation

Dr. Masrur Reaz, North South University, Dhaka, Bangladesh

Mohammed Hossain, The School of Management, University of Liverpool, Liverpool, UK

 

ABSTRACT

Corporate governance has received much attention due to Adelphia, Enron, WorldCom, and other high-profile scandals that occurred during the last decade. The study is a comparative study of corporate governance around the world. Scholars have identified four systems of corporate governance: the Anglo-Saxon System, the Germanic System, the Latin System, and the Japanese System. The study indicates that the developing economies are clearly less advanced in the area of corporate governance and need a more stringent focus on their practices, as their corporate sector characteristics greatly differ from those in the industrial world. Hence, it is not wise to completely replicate western governance practices in the developing countries. Rather, a detailed picture of their corporate governance scenario would pave the way to developing a prudent governance framework by identifying the underlying problem areas.  The foundations of modern corporations can be traced back to the 19th century when entrepreneurs, encouraged by new legislation defining corporations, founded some great companies. The key concept of this legislation was the creation of an entity with its own legal base (Mallin, 2000), being regarded as separate from the owners, yet holding many legal property rights, such as the ability to sign contracts, to sue and be sued, to own property, and to employ.  The consequence was the spectacular growth of business, and the development of ideas regarding the proper management of these corporations. In this respect, there are indications that while management theories regarding what to do have developed, strategies on how to render managerial duties in a way that ensures the best interest of all the concerned parties have been lacking. Researchers have placed importance on management and organization theories, with a lesser focus on the actual roles, behavior and accountability of managers. In all senses, the issue of corporate governance was ignored until comparatively recently. However, recent developments in the business and financial sectors have brought this issue into focus, and evidence for a relationship between sound governance and economic potential has been demonstrated by several researchers (Feinberg, 1998; Johnson and Neave, 1994). It is, therefore, appropriate to consider what the term ‘Corporate Governance’ actually means, and this is considered in the following section. Corporate governance is a practice that deals with concerns that one or more parties involved with organizational decision-making may not behave in the best interest of the organization and associated parties. Over two centuries ago, probably without using the term ‘corporate governance’, Adam Smith (1776) expressed worries that the level and quality of vigilance demonstrated by managers would be far less than that displayed by the partners of a firm. However, it is Berle and Means (1932), whose ideas revolved around the growing separation of power between the executive management of the major public companies and their shareholders, who are considered to be the pioneers of contemporary thinking about corporate governance.  Indeed, control of company affairs has dominated the thoughts of various scholars in their attempts to define corporate governance. 
Monks and Minow (1995) argue that corporate governance seeks to deal with the mechanisms of exercising power and control over the corporation’s direction and behavior; Turnbull (1997) asserts that corporate governance is the set of all influences affecting the institutional processes involved in organizing production and sale; and Cadbury (1992) considers it to be the whole system of controls, both financial and otherwise, which enables a company to be directed in the right way and in the right direction. The OECD (1999) defines corporate governance as a set of relationships between a company’s board, its shareholders and other stakeholders which provides the structure through which the objectives of the company are set, and the means of attaining those objectives and monitoring performance are determined.  Another central theme of corporate governance revolves around how investors in a firm can ensure they get a fair return on their invested funds. As noted by Shleifer and Vishny (1997), it deals with the ways in which suppliers of finance to corporations make sure that they will receive a return on their investment. Whenever ownership of a fund/company is separated from its management, issues regarding how to manage the entity/funds in the best interest of the owners emerge, and that is the focus of corporate governance. There is no universal model of corporate governance, but events in the corporate world such as the collapse of many giant corporations, the changing pattern of share ownership, and the internationalization of cross-border portfolios have led various countries and international organizations to develop principles of corporate governance which may be followed in the context of different countries (Hussain and Mallin, 2002). Among them, the UK Cadbury Report (1992) and the OECD Principles of Corporate Governance (1999) have been widely accepted around the world.  Amidst growing recognition of sound governance, different countries are addressing the issue of corporate governance from varying angles (Becht et al., 2005). A system of corporate governance refers to a country-specific framework of legal, institutional, and cultural factors through which stockholders and stakeholders can influence managerial behavior (Weimer and Pape, 1999), and there are several such country-specific systems that work as determinants of corporate governance practices around the world. Scholars such as Scott (1985), Dejong (1989), Moerland (1995a,b), Weimer (1995), and Weimer and Pape (1999) have all pointed to four systems of corporate governance, which originate from relatively rich, industrialized countries (1). These four systems are: i. the Anglo-Saxon System (USA, UK, Canada, Australia); ii. the Germanic System (Germany, Netherlands, Switzerland, Sweden, Austria, Denmark, Norway, Finland); iii. the Latin System (France, Italy, Spain, Belgium); and iv. the Japanese System. The Anglo-Saxon system mainly stems from the governance practices of the United States, the United Kingdom and English-speaking countries such as Canada and Australia. According to Weimer and Pape (1999), firms in these countries must commit themselves to the priority objective of maximizing shareholders’ wealth, and they have strong legal backing to protect shareholders’ interests (Weimer and Pape, 1999; Franks and Mayer, 1990), through laws that give rise to the principle of ‘one share, one vote’. 
Corporations in countries which follow the Anglo-Saxon model are usually governed by a single board of directors comprising both internal and external members (Weimer and Pape, 1999). The external or non-executive directors supervise and advise the managerial directors on major policy decisions in line with the best interests of the shareholders (Lorsch and MacIver, 1989; Bleicher and Paul, 1986). According to FIBV (1996), the stock market is very strong and active in Anglo-Saxon countries.  The Anglo-Saxon countries have an active market for corporate control, referred to as a takeover market, with common takeover techniques such as mergers, tender offers, proxy fights and leveraged buy-outs (Weimer and Pape, 1999). Ownership concentration is very low in these countries, reflecting widely-held ownership (Weimer and Pape, 1999).

 

Measuring the Efficiency of National Innovation System

Ta-Wei Pan, National Defense University, Taiwan

 

ABSTRACT

The present study applies the data envelopment analysis (DEA) approach, using traditional DEA, the slack-based measure (SBM), and the free disposal hull (FDH), respectively, to combine multiple outputs and inputs in measuring the efficiency of the National Innovation System (NIS) among a sample of 40 countries. The results indicate that the overall technical inefficiencies of a country’s NIS are primarily due to pure technical inefficiencies rather than scale inefficiencies. Regardless of which model was used (traditional DEA, SBM, or FDH), the best-practice calculations indicate that six countries (Japan, South Korea, New Zealand, Romania, Russia, and Taiwan) operated at the top level. Empirical results also indicate that the SBM model can provide DEA efficiency ratings more clearly. Finally, the Tobit regression reveals that the total expenditures on research and development, literacy, and national productivity significantly influence the efficiency of NIS in 44 countries belonging to the Organization for Economic Cooperation and Development. In global economics, the core factors of production have changed from land, natural resources, and human capital to technology and intellectual capital. In the 1980s, Freeman and Lundvall introduced the concept of the National Innovation System (NIS), providing a way to analyze industrial structures, national resources, development dynamics and cooperation, inputs in education, and the manpower of a country. NIS is the flow of technology and information among people, enterprises, and institutions, and it is the key to the innovative process at the national level. This development in innovation and technology is the result of a complex set of relationships among actors in the system, including enterprises, universities, and government research institutes. Applying the concepts of NIS, Freeman (1988) described and explained how Japan became the most successful country in post-war economics in the first paper addressing the concepts of NIS in the literature. In the last decade, the Organization for Economic Cooperation and Development (OECD) continued to study the influences of NIS due to its importance in creating national competitive advantages. Whereas the early literature in the NIS field focused on testing the relationship between selected inputs and selected outputs (e.g., Pavitt, 1985; Evenson, 1991), the present study explores the elements of a country’s socio-economic structure that impact the inputs and the outputs. The data envelopment analysis (DEA) method has been broadly used in previous studies focusing on technical change and innovation in general and on NIS in particular. This method can be applied in a non-parametric way. Moreover, versions of the DEA method are usually utilized in economics when no market prices of inputs and/or outputs exist. This feature supports the use of a DEA-based benchmarking method for NIS because many of the innovative determinants and innovative outcomes in NIS cannot be measured by market prices. The combination of a benchmarking method like DEA with the NIS approach is first intended to compare systems in terms of a set of core variables and core activities that can be assumed to play a decisive role in each innovation system. Thus, the purpose of this paper is threefold: 1. To explore the relative efficiencies of NIS by combining the three methods of traditional DEA, the slack-based measure (SBM), and the free disposal hull (FDH). 2. 
To examine the effects of moderators (including the total expenditures on research and development, literacy, gross domestic product, population, and national productivity) on the efficiency scores of NIS across countries. 3. To provide government officials with insights into resource allocation and competitive advantages as well as help with strategic decision-making. The present study is organized as follows. Section 2 describes the DEA and Tobit regression methodology. Section 3 identifies the variables in NIS. Section 4 measures the efficiency with which the inputs are transformed into outputs using the DEA method, followed by a test of the effect of moderators on the resulting efficiency scores. Finally, the conclusions are provided in the last section. As described in Cooper et al. (2000), a variety of DEA models can measure an organization’s relative efficiency. This study first adopts three types of DEA models to assess the relative efficiency of NIS. These types of DEA models are described in the following discussion. The majority of researchers utilize DEA to measure efficiency scores; the model can be classified as having an output or an input orientation. A country’s NIS is considered to be efficient if it produces the maximum output in a certain environment with given input quantities; it is considered to be technically inefficient if some other units can use no more resource inputs and produce at least the same amounts of all outputs and more of at least one output. The CCR (Charnes et al., 1978) output orientation model can be explained using Equation (1), where yrj and xij refer, respectively, to the observed values of the m outputs and n inputs for each of the decision-making units (DMUs), regarded as the entities responsible for converting inputs into outputs. If the efficiency score equals one, then the target DMU is technically efficient; if it is smaller than one, then the target DMU is technically inefficient. The solution values of the weights λj indicate whether the jth DMU serves as a role model or peer for the target DMU. Banker et al. (1984) extended the constant returns to scale (CRS) model to the variable returns to scale (VRS) model (or BCC model). If an additional convexity constraint requiring the λj to sum to one is added to Equation (1), then the technology is said to exhibit VRS. The use of the VRS specification allows the calculation of pure technical efficiency (PTE) and scale efficiency (SE) effects. According to Banker and Thrall (1992), if the sum of all lambdas for a DMU is greater than one, then there are decreasing returns to scale (DRS); meanwhile, if the sum of all lambdas is less than one, there are increasing returns to scale (IRS). CRS occurs when the sum of lambdas for a DMU equals one. Tone (2001) proposed a slack-based measure (SBM) of efficiency in DEA, with m positive inputs and s positive outputs in the system. This scalar measure deals directly with the input excesses (s-) and the output shortfalls (s+) of the DMU. The SBM formulation in Equation (2) shows that efficiency attains the value of 1 only if all slacks are zero. The FDH (Deprins et al., 1984) model has received a considerable amount of research attention. The basic idea is to ensure that efficiency evaluations are affected only by those performances actually observed. Indeed, this can be accomplished by using a mixed integer programming formulation. 
After computing the efficiency scores with the various DEA models, the present study examines the impact of the moderators, namely total expenditures on research and development (EXRD%), literacy rate (LIT), gross domestic product (GDP), population (POPU), and national productivity (PROD), on those scores. The efficiency index ranges from 0 to 1 (a lower score implies greater inefficiency). Because the dependent variable is restricted in range by censoring, ordinary least squares regression would violate its assumption of normally distributed residuals and yield biased estimates; the Tobit regression model is therefore appropriate, since it is designed for dependent variables that are censored or truncated, making it an efficient method for estimating the relationship between explanatory variables and a censored dependent variable. The Tobit model is illustrated below; it was estimated using the EViews software.
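One common way to write this second-stage model is as a two-limit Tobit with the moderators named above. The following generic specification for country i is offered only as a sketch, since the paper's exact equation is not reproduced here:

```latex
% Latent efficiency score, censored to the observed [0, 1] interval.
\begin{align*}
\theta_i^{*} &= \beta_0 + \beta_1\,\mathrm{EXRD}\%_i + \beta_2\,\mathrm{LIT}_i
              + \beta_3\,\mathrm{GDP}_i + \beta_4\,\mathrm{POPU}_i
              + \beta_5\,\mathrm{PROD}_i + \varepsilon_i,
  \qquad \varepsilon_i \sim N(0,\sigma^2), \\
\theta_i &=
  \begin{cases}
    0 & \text{if } \theta_i^{*} \le 0,\\
    \theta_i^{*} & \text{if } 0 < \theta_i^{*} < 1,\\
    1 & \text{if } \theta_i^{*} \ge 1.
  \end{cases}
\end{align*}
```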

 

Ownership Structure, Board of Directors, and Information Disclosure: Empirical Evidence from Taiwan IC Design Companies

Dr. Yue-Duan Guan, Ming Chuan University, Taiwan

Dr. Dwan-Fang Sheu, Takming College, Taiwan

Yu-Chin Chu, Deloitte, Taiwan

 

ABSTRACT

The demand for information disclosure stems from the information asymmetry and agency conflicts that exist between management and stakeholders. The solution to agency conflicts lies in the ownership structure and the functioning of the board of directors. This study uses the 2003 annual reports of IC design companies in Taiwan and their website information as the subjects of evaluation, exploring the impact of ownership structure and the corporate board on the level of information disclosure. The results show an insignificant negative association between managerial ownership and disclosure level. Secondly, blockholder shareholdings are negatively associated with disclosure level, which implies that less information disclosure is required as share ownership becomes concentrated. Moreover, qualified foreign institutional investor (QFII) ownership is associated with increased disclosure, indicating that QFIIs play an active role in promoting information transparency. Finally, the results show that director ownership has a significantly positive impact on corporate disclosure, suggesting that the board acts as an effective internal corporate governance mechanism. A series of financial statement frauds (1) has occurred in recent years. To protect investors' rights and enhance information transparency, the regulatory authorities of securities markets (2) and information intermediaries (3) have exerted great efforts to advocate corporate governance in the hope of lessening the adverse selection and agency problems that result from information asymmetry. Therefore, the Taiwan Stock Exchange (TSE) promoted the establishment of independent directors and supervisors in 2002, and the Securities and Futures Commission (SFC) set up the Information Disclosure Assessment Committee to assess the information disclosure (4) of listed and OTC companies in Taiwan. However, only the names of the top one-third (5) of companies in terms of transparency were published. Because degrees of competition, information disclosure environments, and regulations differ across industries, the results of such joint evaluation cannot be directly used to examine the causes and consequences of information disclosure. Akerlof (1970) theorizes that transparency of corporate information can reduce the adverse selection and moral hazard caused by information asymmetry, which in turn can enhance the liquidity of stocks and reduce the cost of equity and debt capital (Welker 1995; Botosan 1997; Sengupta 1998). Therefore, the determinants of corporate disclosure have long been a focus of research. Since the demand for information disclosure arises from the information asymmetry between management and shareholders, whether that asymmetry leads to severe agency conflicts depends on the ownership structure and the functioning of the board of directors (Jensen and Meckling 1976; Fama 1980; Healy and Palepu 2001). Accordingly, this paper examines the impact of ownership structure and board composition on corporate disclosure. Ownership structure is characterized by managerial ownership, blockholder ownership, and QFII ownership, and board composition is measured by the proportion of outside directors on the board and director ownership. The Internet has become a new medium of communication as a result of technological innovation.
Considering the timeliness and ease of transmission of Internet information, many companies have built websites to publish their financial and non-financial information; the transparency of Internet information has therefore gained considerable attention recently (Lymer 1999; Craven and Marston 1999; Davey and Homkajohn 2004). Public investors' access to corporate information may include annual reports, company websites, the business media, and informal communication. The evaluation in this study uses annual reports and website contents as sources, because annual reports are more reliable and there is legal responsibility for false reports and statements. Corporate disclosure is proxied by an aggregated disclosure score for the annual report, covering background information, a summary of historical results, non-financial statistics, projected information, management discussion and analysis, and corporate website information scores (Botosan 1997; Eng and Mak 2003). To distinguish the disclosure quality of the indicators, content analysis is conducted by two independent researchers, who evaluate each item based on the accuracy and detail of disclosure; the two researchers' scores are averaged to increase the reliability and validity of the evaluation results. In this article, the annual reports and website information of the publicly traded companies in Taiwan's IC design industry (6) are chosen. Selecting a single industry precludes the interference arising from industry-specific accounting regulations and differing disclosure environments. Moreover, previous studies have found disclosure policies to be relatively stable across periods, and limitations of time and manpower confined us to the most recent annual report and website disclosures. The IC design industry is knowledge-intensive, with a great emphasis on intellectual property and innovative activities. In view of proprietary costs, information disclosure may involve early leakage of business intelligence, which may impair corporate competitiveness. Therefore, the impact of ownership structure and the board of directors on information transparency in the IC design industry is especially worth exploring. Overall, the empirical results show that higher blockholder ownership is significantly associated with decreased disclosure, an indication that blockholders play a monitoring role that substitutes for disclosure. In addition, an increase in QFII ownership reduces information opacity. We also find that an increase in director ownership facilitates information transparency, as hypothesized. The remainder of this paper is organized as follows: Section II reviews the related literature and develops hypotheses. Section III describes the empirical model, variable measurement, and sample selection. Section IV presents the empirical findings and analyses, and a summary is provided in the final section. The determinants of information disclosure have long been explored. Factors such as the industry competition environment (Verrecchia 1983), signaling (Hughes 1986), news type (Skinner 1994), firm characteristics (Chow and Wong-Boren 1987), and ownership and board composition (Fama 1980) have been found to affect corporate information disclosure and reporting policies. The demand for information disclosure in capital markets originates from the information asymmetry and agency conflicts between management and stakeholders.
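The measurement and testing logic described above can be sketched in a few lines. The column names, the equal weighting of the two raters, and the simple regression specification below are illustrative assumptions rather than the paper's exact model.

```python
# Illustrative sketch only: build an aggregated disclosure score from two raters'
# category scores and relate it to ownership and board variables.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("ic_design_2003.csv")  # hypothetical data file

# Each rater scores six categories: background, historical summary, non-financial
# statistics, projected information, MD&A, and website information.
categories = ["background", "history", "nonfinancial", "projected", "mdna", "website"]
df["score_r1"] = df[[c + "_r1" for c in categories]].sum(axis=1)
df["score_r2"] = df[[c + "_r2" for c in categories]].sum(axis=1)
df["disclosure"] = (df["score_r1"] + df["score_r2"]) / 2  # average of the two raters

# Disclosure regressed on managerial, blockholder, QFII, and director ownership
# plus the proportion of outside directors (firm size as a control).
model = smf.ols(
    "disclosure ~ mgr_own + block_own + qfii_own + dir_own + outside_ratio + firm_size",
    data=df,
).fit()
print(model.summary())
```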

 

The New Techno Culture in the Workplace and at Home

Dr. Richard Gendreau, Bemidji State University, Minnesota

 

ABSTRACT

The new culture of technology started in the last decade of the 20th century with e-mail and progressed through cell phones, high-speed networks, wireless platforms, and iPods. This techno culture brought the workplace and the worker's home into the digital age. New technology was not only entering the workplace and home at a faster pace, it was also changing at an increasing pace: just after workers learn how to use a new technology, an updated version infiltrates the workplace and home. This paper explores how workers cope with technology and the complications it causes. The author's examination of new technology looks at technophobia, information overload, the multitasking paradox, uninvited e-mail, and technostress. The paper concludes by exploring ways to manage technology in the workplace and at home. New technologies create cultures of use around themselves. One should distinguish between society and culture: society refers to the community of people, while culture refers to the systems of meaning that govern the conduct and understanding of people's lives. In the literature, the distinction between society and culture is often unclear and the terms are sometimes used interchangeably (American Anthropological Association and Computing Research Association, 1995). Because technology and its mutual interactions with both society and culture are rarely addressed, this paper will focus on the effects of technology on culture and/or society, as there can be no argument that both are affected (Murphie and Potts, 2003; Long and Post, 2006). Technologies create new ways of doing things that were unthinkable prior to the technology (Barnet, 2006). Since authors have developed hundreds of definitions of culture (Kroeber and Kluckhohn, 1952), this paper settles on the following definition: "Culture is defined as the beliefs, values, behavior, and material objects that constitute a people's way of life" (What is culture? 2006). The world has more technology than ever before, with technological change occurring at an ever-increasing pace. The change can be both stylistic and structural. Technology is effecting a profound and rapid change on everything, including people, governments, and art, and is penetrating deeply into our social life (The Center for the Study of Technology and Society, 2006). Technology can increase the individual worker's productivity, creativity, and mobility. It shapes an individual's everyday home life, establishing and reinforcing new patterns of belief and behavior. On the dark side, technology can cause individuals to feel stressed, lost, and anxious (Hupli, 1994). This paper proposes that technology has changed the beliefs, values, behaviors, and material objects that constitute a people's way of life both at work and at home, thus creating "The New Techno Culture in the Workplace and at Home." It will examine current technology, the information explosion, multitasking, technostress, managing technostress, and directions for future research. Has the computer replaced the traditional role of the dog as man's best friend? This trusted and faithful computer, with its software, e-mail, the Internet, and digital information, offers a way of life that increases an individual's productivity, efficiency, effectiveness, creativity, and mobility. The computer, with its software, online databases, and wireless technology, provides the platform for the new techno culture.
At home, an individual can use the Internet and instant messaging to chat with a friend or business associate on the other side of the world. Wireless technology has given mobility and increased productivity to people all over the world. A quantitative study by OMNI Consulting Group LLP shows that mobile data services increased global workforce productivity by 42.7% during the five-year period 2000-2005 (Bernhard, 2005). The cell phone and related wireless devices allow people to message, browse, interact, and converse over WAN, LAN, Wi-Fi, and PAN networks to stay connected with work and home life (Beaulieu, 2002). "And as the boundaries begin to blur between the different technology kits–PC, mobile, laptop, PDA–so does the distinction between work and personal life" (Easen, 2004). Juggling technologies such as cell phones, pagers, and electronic conferencing has all the ingredients for a technology meltdown. These networks, or information highways, allow everyone to share and communicate information on a global scale. The information highway, or superhighway, is called the "Infobahn" in Europe; it will transmit voice, data, TV, e-mail, news, and more to homes, schools, and businesses all over the world (Hunter, 1994). The global information highways are developing into a Global Information Infrastructure (GII) for worldwide collaboration to solve common problems (Gore, 1994). However, the success of such collaboration is often tied to technology that does not always work when it is needed most; nothing increases anxiety like a technology glitch. One can easily forget that communication is about relationships, not technology. The cell phone and its related devices have transformed our society into an information and communication society, one that has changed our culture with its disruptive ring tones and loud conversations that share private information with everyone in restaurants and other public places. This is an example of "continuous partial attention," constantly scanning incoming communications for the most interesting or important information (Maxwell, 2002). The 15th-century printing press started the information age. The computer, with digital data, created an information explosion. "Information is now a commodity that can be bought and sold, or used as a form of entertainment, or worn like a garment to enhance one's status" (Postman, 1990).

 

Digital Music Pirating By College Students: An Exploratory Empirical Study

Dr. Harrison Green, Troy University, Westfield, IL

 

ABSTRACT

This paper addresses illegal music downloading by college students. A survey was administered in three consecutive semesters. Results indicate that males download more songs illegally than females, who tend to have more respect for copyright laws. Information Systems majors download more songs illegally than other majors but do not differ in respect for copyright laws. In general, students tend to be undecided with regard to the legitimacy of copyright laws and the legality of file sharing. No clear trend has been established with respect to the quantity of songs downloaded illegally. There is some evidence that attitudes toward illegal downloading are influenced by legislation and prosecution. Illegal music downloading is a common ethical problem encountered by college-age students. College students typically have little money and like to listen to music frequently. Universities have complained that music downloading clogs network bandwidth and slows down other processing. The number of lawsuits against illegal downloaders has rapidly increased within the last year. Within the last couple of years, pay-per-track sites have become more prevalent, with broader and broader selections. Most students at our university have been very wary about trying the pay-per-track services. In my information systems classes, we normally have discussions on issues such as music downloading. The overwhelming majority of these students do not appear to have any ethical problems with peer-to-peer (illegal) music downloading. Most of them have either engaged in this practice or benefited from someone else who has done it. They claim that CDs are too expensive and that record companies already have a lot of money. Some claim that there is no distinction between recording music from the radio and downloading it from another user without paying for it. Attitudes toward peer-to-peer music downloading might be indicators of questionable ethical approaches to more important business issues. Most likely many of them would take the same attitude toward the illegal copying of computer software or even stealing copyrighted materials or trade secrets. According to recent statistics, illegal downloading is a widespread problem. Estimates from a website monitoring peer-to-peer usage support this assertion. In the United States, the average number of users at any one time during April 2004 was 4,688,988; during October 2004, there were 4,771,060 users. Worldwide there were 7,639,479 users in April 2004 and 6,729,450 in October 2004. Although the exact figures are disputed, it is certain that file sharing on this scale has a financial impact. The Associated Press cited a 7.6% drop in music sales in both 2002 and 2003. The phonographic industry blames its declining profits directly on illegal downloading. They reason that downloaders buy less music, and that they ordinarily would be the biggest music buyers. There is no doubt that illegal file sharing is a serious ethical and financial problem. There is no easy solution; however, delving into root causes may lead to a more permanent solution than will current efforts by record companies to instill the fear of prosecution. In a one-year period between 2003 and 2004, a total of 2,454 lawsuits were filed by the record industry. A large proportion of the defendants were college students. In 2004, the Justice Department began to specifically target college students.
Schools can receive notices when illegal activity is traced to their IP addresses. To avoid liability, universities have taken steps to stop illegal file sharing. They block peer-to-peer sites and discipline students who are caught. In October 2004, the Supreme Court upheld a lower court decision that ISPs cannot be required to reveal the identity of downloaders who use their service. College students may have regarded this ruling as a temporary victory. To circumvent this restriction, record companies have been filing John Doe suits, whereby they can go to court to obtain personal information for offending IP addresses. Since a large portion of downloading takes place internationally, the recording industry has recently begun to target European downloaders. The first lawsuits in Britain, France, and Austria were announced in October 2004. Whether or not these lawsuits are significantly reducing illegal activity is disputable. In a survey conducted by Jupiter Research, only one third of downloaders said they would decrease illegal activity in light of current lawsuits. The largest peer-to-peer downloading website is Kazaa. According to the Billboard Internet site, many Kazaa users have been switching to smaller, less detectable sites such as BitTorrent and eDonkey. At my current university, students are continually searching for sites that are not blocked by the university's network firewall. A survey by Gopal measured reactions to potential downloading situations. Carlson and Taylor asked respondents whether or not they download illegally, as did the Pew Internet study. Moore questioned them about the frequency of downloads. None asked about the estimated total number of downloads; this figure would provide information related to record company losses as well as the distribution of downloading behavior. An accompanying question about future peer-to-peer downloading plans could gauge current attitudes toward downloading. Several surveys reported that most young people believe that peer-to-peer music downloading either is not unethical or should not be restricted. When presented with questions about more traditional criminal activities, such as shoplifting, almost all respondents considered them to be illegal and unethical. In Gopal's structural equation model, ethical measures from hypothetical cases strongly influenced the tendency to download. Examples of these cases are expense account padding, failing to report income for tax purposes, and failure to correct engineering design flaws. Degree of respect for copyright laws, rather than generic ethics scenarios, would be a more precise measure of relevant ethical standards in this context. The most applicable law for music downloading is the 1998 Digital Millennium Copyright Act (DMCA).

 

Exploring the Impact of Ethnicity on Conflict Resolution in Joint Purchase Decisions

Rina Makgosa, Ph.D., University of Botswana, Botswana

 

ABSTRACT

The literature on family decision making contains studies that have investigated differences in relative influence across ethnic groups. The research reported in this paper concentrates on conflict resolution in joint purchase decisions across ethnic groups. Specifically, it investigates the mix of conflict resolution strategies used by husbands and wives from three ethnic groups in Britain (British Whites, Indians, and African Blacks) in joint decisions to purchase major household consumer durable products. Results demonstrate that husbands and wives use several mixes of strategies when resolving conflict in joint purchase decisions. Specifically, a majority of British White husbands and wives use bargaining more than Indians and African Blacks. Additionally, compared with British White husbands, both Indian and African Black husbands tend to combine bargaining, assertiveness, and playing on emotion, or all of the conflict resolution strategies, whereas British White and African Black wives use all four strategies more than Indian wives. Overall, the results show that ethnicity plays an important role in understanding the means of influence in joint purchase decisions. A majority of consumer purchase decisions are made by the family rather than by an individual; the family is commonly regarded as the critical decision-making and consumption unit (Ndubisi and Koo, 2005). Much of the literature investigating the degree to which husbands and wives independently or jointly participate in activities that contribute to the decision-making process (e.g., Martinez and Polo, 1999; Yavas, Babakus, and Delener, 1994) has classified family purchase decisions into three categories: husband-dominant, wife-dominant, and joint purchase decisions. However, the area of joint purchase decisions remains relatively under-researched. Research into joint purchase decisions is a critical area of study in consumer behaviour, because family decisions for most major household durable products are made jointly by husbands and wives rather than dominated by a single spouse (Ndubisi and Koo, 2005). For instance, joint decisions have been reported for most major household purchases, including domestic appliances, entertainment equipment, furniture, automobiles, vacations, and houses (e.g., Ganesh, 1997; Martinez and Polo, 1999). Moreover, joint purchase decisions are perceived to be complex, unstructured, and conflict-laden (Kirchler, 1993). When conflict arises, spouses will attempt to resolve it before a joint purchase decision is made (Spiro, 1983). From a marketing perspective, it is also crucial to investigate the unique characteristics of joint purchase decisions, because unresolved conflict in a joint purchase decision is likely to delay a purchase or inhibit repeat purchase (Kirchler, 1993). Although there have been some important investigations of conflict resolution within joint purchase decisions, several other important research questions have not been addressed. In particular, the role that ethnicity plays in explaining conflict resolution in joint purchase decisions has received little attention in comparison with age, length of marriage, sex role orientation, education, income, and occupation. In fact, research findings tend to be limited to a single culture, mostly North American samples (e.g., Kim and Lee, 1996; Nelson, 1988). To fill this gap, the purpose of this paper is twofold.
First, it identifies the mix of strategies used by husbands and wives to resolve conflict in joint purchase decisions for major household consumer durable products. Second, it investigates how husbands and wives from three ethnic groups in Britain (British Whites, Indians, and African Blacks) use a mix of conflict resolution strategies. Joint purchase decisions have been classified as consensual and accommodative decisions in previous studies (Kirchler, 1993; Spiro, 1983). Consensual decisions are characterised by mutual desires or common objectives, including agreement between husbands and wives about buying motives, information, product preferences, evaluative beliefs, and choice criteria. Accommodative decisions represent conflict situations in purchase decisions. This paper focuses on accommodative decisions. This focus stems from the common view that, while husbands and wives reach a majority of household purchase decisions jointly, they do not always have similar desires in those decisions. For example, in one study, 88% of couples reported that they experienced conflict in joint purchase decisions (Spiro, 1983). In another study, 69.4% of families agreed that there was conflict between husbands and wives at the time of purchase, while only 30.6% denied that conflict existed (Kaur and Singh, 2005). The limited research into how husbands and wives resolve conflict in joint purchase decisions has focussed on the types of conflict resolution strategies (e.g., Nelson, 1988); the extent to which a particular conflict resolution strategy is used (e.g., Belch, Belch and Sciglimpalia, 1980); how the choice of a particular strategy is affected by factors such as gender, the conflict situation, marital satisfaction, and relative dominance (e.g., Kirchler, 1993); and the mix of conflict resolution strategies used by spouses and how factors such as age, length of marriage, income, education, occupation, and sex role orientation affect the choice of that mix (e.g., Spiro, 1983; Kim and Lee, 1996). While some important areas have been investigated in the context of conflict resolution, some pertinent questions are yet to be addressed. Thus, this study extends the conflict resolution literature by focussing on how ethnicity affects the mix of strategies used by husbands and wives when making joint purchase decisions.

 

The Empirical Study on the Effect Factor of Top Management Remuneration in China

Dr. Jianjun Zhu, Huazhong University of Science and Technology

 

ABSTRACT

This paper investigates the factors affecting top management compensation in Chinese enterprises. We use data on 986 listed companies for 2004 and 2005 and examine business performance, company scale, corporate governance structure, the number of top managers, and the company's registration area. The results are as follows: (1) there is a positive relationship between top management remuneration and return on equity (ROE), total assets, and the number of top managers; (2) there is a negative relationship between top management remuneration and the shareholding ratio of controlling stockholders; (3) top management remuneration in the eastern region is markedly higher than in the central and western regions; and (4) top management remuneration is more closely related to 2004 performance than to 2005 performance. An interesting public sentiment exists regarding CEO compensation (Offstein and Gnyawali 2005). Accompanying the rise in CEO compensation is a corresponding ascension in managerial and academic feelings ranging from curiosity to downright hostility (Nichols and Subramaniam 2001). The phenomenon stands out in the U.S., where CEO compensation is, on average, 209 times that of an average factory worker (Nichols and Subramaniam 2001). In China, the disparity in compensation between top management and employees is not as great, which is directly related to the history of China's economic evolution. Before the market-based reforms, state-owned enterprises in China followed the principle of distribution according to work. There were two types of distribution, monetary and non-monetary; non-monetary distribution accounted for a considerable proportion and was based mainly on the principle of equal allocation. Monetary distribution, in general, took the single-salary form. In the same enterprise, the top manager's salary and an ordinary employee's salary were much the same, which seriously dampened the enthusiasm of top managers. The market-based reform of the distribution system has now broken with the former egalitarian principle. This paper therefore focuses on the factors that influence the remuneration of top management in Chinese enterprises. Scholars have researched the problem of executive pay for decades. Early work examined the relationship between executive remuneration and firm performance and drew inconsistent conclusions. Jensen and Murphy (1990) studied the relationship between executive remuneration and firm performance and found it to be weak, while Conyon and Schwalbach (1999) found a positive relationship in their empirical research on British and German enterprises. The relationship between executive remuneration and firm performance has long been a focus of scholarly research, because the relationship should be close in theory yet is complicated and ambiguous in practice. In addition, other scholars have examined the relationship between top management compensation and other determinants from different angles.
Murphy (1986) and Barro (1990) investigated the influence of personal characteristics, such as CEO age and tenure, on CEO compensation; they found that when CEOs have worked in a company for many years, pay-for-performance sensitivity declines as their age increases. Core (1999) and other researchers found that the board of directors and the ownership structure can explain variation in CEO compensation. Harvey and Shrieves (2001), after considering corporate governance mechanisms and agents' risk allocation, found that governance mechanisms and agents' risk aversion have large effects on pay-for-performance sensitivity. Their empirical results show that outside directors and the existence of large stockholders strengthen pay-for-performance sensitivity, whereas CEO age and shareholding ratio have a marked negative influence. Cyert et al. (2002) established a game-theoretic model of the interaction between the board of directors and the CEO and tested it empirically, showing that, in the presence of large shareholders, the board of directors as an internal governance mechanism, together with takeover threats, can effectively prevent top management from setting their own remuneration. Milbourn (2003) developed a model to study the relationship between CEO reputation and stock-based compensation, and showed a positive and economically meaningful relationship between stock-based pay sensitivities and CEO reputation. His findings are robust to controls for CEO age, firm size, dollar variability of stock returns, and industry effects. Khan et al. (2005) researched how institutional ownership concentration and dispersion affect levels of CEO compensation; they found that concentration among the largest owners is associated with lower levels of compensation, as well as with higher ratios of salary to total compensation and lower ratios of options to total compensation. Lizenquan (2000) used the information on top management shareholdings and annual remuneration disclosed in 1998 to study this question.
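The relationships reported in the abstract amount to a compensation regression estimated separately by year. A minimal sketch of such a specification, with hypothetical variable names and a simple OLS form that is not necessarily the paper's exact model, is shown below.

```python
# Illustrative sketch: top-management remuneration regressed on performance,
# scale, governance, and regional variables; all column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

panel = pd.read_csv("china_listed_2004_2005.csv")  # hypothetical data file

# Log remuneration on ROE, log total assets, number of top managers,
# the controlling shareholder's stake, and an eastern-region dummy.
spec = ("log_remuneration ~ roe + log_total_assets + n_top_managers"
        " + controlling_stake + east_region")

for year, group in panel.groupby("year"):   # estimate 2004 and 2005 separately
    fit = smf.ols(spec, data=group).fit()
    print(year, fit.params.round(3).to_dict())
```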

 

Optimizing Investment Portfolio by Applying Return Factor Model: A Case Study for the Mechanical Device Industry of China

Dr. Chung-Chang Lien, Leader University, Tainan, Taiwan, R. O. C.

Dr. Chie-Bein Chen, Takming College and National Dong Hwa University, Taiwan, R. O. C.

Ming-Ju Wu, National Dong Hwa University, Taiwan, R. O. C.

 

ABSTRACT

The stock market of China has developed very fast in recent years: in 1990 there were only 10 A-share stocks, and the number had risen to 720 by the end of 1997. Stock investment has become a common financial activity and a good outlet for capital. In this study, an investment framework is established for the A-shares of Chinese listed companies in the mechanical device industry. The study selects two return factors (the book-to-market ratio and momentum) as the inputs of a DEA-BCC model, with the return on the stock as the output. Applying the DEA-BCC model, it is straightforward to evaluate each stock's relative efficiency value and to choose several well-performing stocks to construct the investment portfolio. Finally, the study uses Markowitz's mean-variance theory to decide the optimal investment weights for the portfolio. The Efficient Market Hypothesis (EMH) holds that a financial market can be seen as efficient if the market incorporates all available information in setting asset prices; in such a market, no excess returns can persist. However, uncertainty and information asymmetry always exist in the real world, and the greater they are, the more difficult it is for individual investors to make investment decisions. Although many scholars use various types of technical analysis to forecast stock prices, forecasts and reality are often inconsistent; even the same technique may produce different results because different scholars use different time periods to analyze the same problem. Thus, a return to fundamental analysis seems increasingly important, because it emphasizes company management and reflects long-run stock market prices. Stocks are a good investment outlet, but the stock market is always subject to uncertainty and information asymmetry, and the information that individuals hold about companies and the market is always limited. To address this problem, this study proposes an approach based on factor theory and the DEA (data envelopment analysis) model to show investors which stocks perform efficiently and are therefore worth investing in. The study then takes Markowitz's mean-variance (M-V) theory as a foundation to determine the optimal weights for these stocks under a specific return requirement. In this study, the investment analysis is not restricted to short-term transactions (e.g., daily trading) but considers monthly investment. Finally, the study applies the case of the A-shares of Chinese listed companies in the mechanical device industry to implement the investment efficiency analysis. Fama and French (1993) explicitly point out three factors related to stock returns: (1) the market risk premium (the return on the market minus the risk-free rate); (2) company size (stock price multiplied by the number of shares in circulation); and (3) the book-to-market ratio (book value/market value). Building on the Fama and French (1993) three-factor theory, Carhart (1997) found that stocks with high past returns usually continue to perform better than stocks with low past returns, and proposed that stock returns exhibit momentum (i.e., a stock's return is affected by its previous-year return). Jegadeesh and Titman (1993) also argued that investors can profit by buying stocks according to momentum.
Yau (1995) investigates whether the book-to-market ratio phenomenon is present in Taiwan. The empirical results show that the book-to-market ratio is a stable leading indicator of subsequent returns, confirming the existence of the book-to-market phenomenon. Therefore, company size, the book-to-market ratio, the market risk premium, and momentum appear to be the factors most closely related to stock returns. Daniel et al. (1997) use the book-to-market ratio, company size, and momentum to develop a characteristic-based benchmark method for measuring stock performance. Chen (2002) also used the book-to-market ratio, company size, and momentum as the inputs of a DEA-BCC model, with the return on the stock as the output. Chen (2003) finds that stock returns and momentum show a significant positive correlation in the Taiwan stock market (1995-2002), a result consistent with Fama's theory. From the above, it is clear that the book-to-market ratio, company size, and momentum can explain the return on a stock. However, in Fama's three-factor theory, the correlation between company size and stock return usually appears to be negative; in other words, the stocks of small companies usually bring higher returns to individual investors than those of large companies. Thus, company size is not suitable as an input to the DEA model, because the relationship between output and input should be positive in DEA theory; otherwise, the accuracy of the DEA model will suffer. Consequently, this study uses the book-to-market ratio and momentum as the inputs of the DEA-BCC model, with the return on the stock as the output. In this study, the monthly average return is selected as the return on the stock, and the monthly average return over the previous 12 months is selected as the momentum factor. According to the DEA-BCC model, it is easy to evaluate each stock's relative efficiency value and to choose several well-performing stocks to construct the investment portfolio. Several studies have used the DEA method to evaluate mutual fund performance, focusing mainly on measuring funds' relative efficiency values. For instance, Murthi et al. (1997) used a DEA model to study American mutual fund performance; they defined the output variable as excess return and the input variables as the standard deviation of return, sales charge, commission rate, and turnover rate, and found that high trading costs did not bring investors high returns. Patrick and Robert (1998) used DEA to analyze 135 common stock funds, defining 1-, 3-, and 5-year annualized returns as output variables and sales charge, minimum initial investment, and expense ratio as input variables, and listed the DEA-efficient and near-efficient funds. More and more studies have applied DEA in the financial area in recent years (Greg et al., 2005; Cristina et al., 2003). This study applies the DEA-BCC model to evaluate each stock's relative efficiency value and to choose several well-performing stocks to construct the investment portfolio. Figure 1 shows the framework of this research. Before calculating the efficiency values, the book-to-market ratio, momentum, and return on the stock are normalized, because the output and input variables must be positive numbers in the DEA model.
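The last two steps of this framework (making the DEA variables positive and then choosing mean-variance weights for the DEA-selected stocks) can be sketched as follows. The min-max rescaling and the use of a generic numerical optimizer are illustrative choices under assumed data, not necessarily the exact procedures of the paper.

```python
# Illustrative sketch: (1) rescale variables so DEA inputs/outputs are strictly
# positive, (2) Markowitz minimum-variance weights for a required return, long-only.
import numpy as np
from scipy.optimize import minimize

def normalize_positive(x, low=0.1, high=1.0):
    """Min-max rescale a series into [low, high] so every value is strictly positive."""
    x = np.asarray(x, dtype=float)
    return low + (high - low) * (x - x.min()) / (x.max() - x.min())

def markowitz_weights(mean_returns, cov, required_return):
    """Minimize portfolio variance subject to a target mean return and full investment."""
    n = len(mean_returns)
    constraints = (
        {"type": "eq", "fun": lambda w: w.sum() - 1.0},
        {"type": "eq", "fun": lambda w: w @ mean_returns - required_return},
    )
    result = minimize(lambda w: w @ cov @ w, x0=np.full(n, 1.0 / n),
                      bounds=[(0.0, 1.0)] * n, constraints=constraints)
    return result.x

# Hypothetical monthly mean returns and covariance matrix for three DEA-selected stocks.
mu = np.array([0.012, 0.009, 0.015])
cov = np.array([[0.0040, 0.0012, 0.0010],
                [0.0012, 0.0030, 0.0008],
                [0.0010, 0.0008, 0.0050]])
print(markowitz_weights(mu, cov, required_return=0.011).round(3))
```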

 

Implement Business Strategy via Project Portfolio Management: A Model and Case Study

Dr. Du Lan-ying, HuaZhong University of Science and Technology, Wuhan, China

Shi Yong-dong, HuaZhong University of Science and Technology, Wuhan, China

 

ABSTRACT

This paper provides an approach for successfully translating business strategy into projects via project portfolio management. It reviews failures in strategy implementation and the limitations of previous solutions, and compares the main differences between the "project-based company" and the "product-based company". A complete model is developed based on the theory of project portfolio management and on personal experience in consulting engagements. A case study within China Construction Third Engineering Bureau verifies the model and serves as a good example of its application. The model is helpful and valuable for top managers as well as for researchers and consultants in strategic management. It is often bad execution that undermines the success of a deliberate strategy: top managers may have many deliberate strategic options, yet these yield less value when poorly executed. A study of 275 professional portfolio managers reported that the ability to execute strategy was more important than the quality of the strategy itself (Mavrinac and Siesfeld, 1998); strategy implementation was the most important factor shaping these portfolio managers' assessment of management and corporate valuations. In the early 1980s, a survey of management consultants reported that less than 10 percent of effectively formulated strategies were implemented successfully (Kiechel, 1982). A 1999 Fortune article, in a cover story on prominent CEO failures, concluded that the emphasis placed on strategy and vision created a mistaken belief that the right strategy was all that was needed to succeed. The authors concluded that "in the majority of cases - we estimate 70 percent - the real problem isn't bad strategy, it's bad execution" (Charan and Colvin, 1999). A recent survey on the strategy implementation of Chinese enterprises reported that 81.4% of informants believed strategy implementation was the most important factor, while only 18% of informants thought formulated strategies were implemented effectively (Wei hua-ning, 2005). Scholars have created a great many approaches, methods, and frameworks for strategy implementation as strategic management theory has developed. For example, the Planning School advocated a plan-and-control system, which was adopted by GE (General Electric) but finally abandoned. Michael E. Porter, the leader of the Positioning School, concluded three general modes that focus on the transition from corporate strategy to business strategy but lack practical guidelines for implementing business strategy in operations. Consultants have developed an integrated view in practice, synthesizing various strands of research and applying them flexibly. In recent years, the theory of the Balanced Scorecard (BSC) has attracted attention. Kaplan and Norton (2001) describe an integrated management approach that uses the BSC to translate strategy into action. It has been applied in many organizations with good effect, such as Mobil NAM&R, CIGNA, and Rock Water Company. However, on reexamining those organizations that implemented business strategy successfully with the BSC, the authors find that it is not suitable for all kinds of companies. Those organizations share significantly similar characteristics in production and organization structure: their business management and operations are always centered on certain products, such as automobiles, televisions, beverages, or chemical products. This kind of company can be called a "product-based company".
In comparison, another kind of company can be called a "project-based company", whose strategic business units consist of projects and which is not well suited to the BSC; examples include construction companies, IT companies (e.g., software development companies), shipbuilding companies, and venture investment companies. As mentioned above, the previous solutions provided by scholars and researchers have many limitations. The authors find that, for project-based companies, it is urgently necessary to develop an approach to translate business strategy into projects effectively. This paper aims to solve that problem, and includes two main parts: developing a model based on the theory of project portfolio management (PPM) and on personal experience in consulting engagements, and presenting a case study of applying the model within China Construction Third Engineering Bureau (CCTEB). A literature search was conducted among the premier academic journals on project management and strategic management to find all articles about project portfolio management and strategy implementation. These journals include the Journal of Project Management, Project Management Journal, Harvard Business Review, Strategic Management Journal, Journal of Business Strategy, and so on. The keywords used in the search were "portfolio management" and "strategy implementation", and a total of 83 papers published in these journals during 1990-2005 were found. From the literature we draw together a number of useful methods, approaches, techniques, rules, frameworks, models, and algorithms, which helped us form ideas for answering those two questions. The personal experience and lessons learned from consulting with many firms over a long period were beneficial in forming the model and applying it within the 2nd Company of CCTEB. The case study verified and refined the model, and at the same time provides a good example of its application. The differences between the two kinds of companies are summarized in the following three main aspects (see Table 1). Firstly, from the viewpoint of business operations, the mode of the product-based company is make-to-stock (MTS), while the mode of the project-based company is make-to-order (MTO). The product-based company usually has already carried out production before it receives the customer's order, according to existing standards or product series; the direct goal of production is to replenish stock, keeping a certain number of products in stock to meet customer demand. In contrast, the project-based company usually carries out production according to the customer's order. The customer may have personalized demands about function, quality, quantity, or date of delivery; after negotiation and contracting, the company sets up a project team to design and produce. Secondly, from the viewpoint of the technical process, the product-based company uses continuous production, while the project-based company uses discrete production.

 

The Choice between First-Price and Second-Price Auction by an Informed Seller

Shih-Chung Chang, National Taiwan University, Taipei, Taiwan

Chih-Hsiang Hsu, National Taiwan University, Taipei, Taiwan

Ming-Sung Kao, National Taiwan University, Taipei, Taiwan

 

 

ABSTRACT

This paper analyzes how an informed seller determines the optimal auction format. Standard first- and second-price auctions are considered in the model. Since the seller has superior information about the object, the selection of an auction reveals the seller's private information, which influences bidding strategies. Our main finding is that an informed seller will hold only a second-price auction. The intuition is that bidders' strategies in each auction format have different sensitivities to the information revealed by the seller. When the seller announces that he will hold a first-price auction, it signals to bidders that they have overestimated competitors' valuations, and this leads to a lemons problem; thus, bidders bid the lowest value in the first-price auction. On the other hand, bidders' strategies in the second-price auction are independent of the information revealed by the choice of auction format, so revenue in the second-price auction is higher than in the first-price auction. Traditionally, papers on auction theory assume an uninformed seller facing a set of bidders with private information. However, the assumption that the seller is uninformed is unrealistic given that he has owned the object for a period of time; for instance, the seller may at least have more information about the object's quality than the bidders do. Milgrom and Weber (1982) first noted this problem and argued that it is better for a seller to reveal his or her private information. (1) However, they do not address how to verify the seller's information. When the auction format can be freely selected by the seller, the seller's private information may be revealed inadvertently by his choice of auction format. This information effect may make a great difference compared with the traditional auction environment, yet few papers have taken notice of the issue. Therefore, in this paper we consider how the information effect influences a seller's decision on the auction format. We consider an extreme case in which the seller has full information about the value of the object, while each bidder receives only part of the information about it, and information is independent among bidders. The standard first-price and second-price auctions are considered in the model. In Milgrom and Weber's (1982) affiliated environment, both auction formats generate equivalent revenue for a seller when bidders' signals are independent (2); thus, an uninformed seller is indifferent between selling the object via a first-price or a second-price auction. However, when a seller has superior information about the object on sale, the selection of an auction format reveals his private information, and the revealed information may lead bidders to update the bidding strategies they derived from their own information. Since bidders' strategies in each auction format have different sensitivities to the revealed information, revenue equivalence may not hold even though the independence condition holds. This means that an informed seller may prefer one of the two auction formats. Our main finding is that the bidding strategies in a second-price auction are independent of the information revealed by the seller's choice of auction format; thus, a bidder still follows the strategy derived from his private information, even after receiving the new information revealed by the seller.
However, the announcement that the object will be sold via a first-price auction signals to bidders that they have overestimated competitors' types; this induces the bidders to bid less in the first-price auction. The incentive to bid less makes the revenue generated by a first-price auction lower than that generated by a second-price auction. As a result, an informed seller will not hold a first-price auction once the information revelation effect is considered. The idea of this paper originates from the revenue-equivalence principle, first noted by Vickrey (1961). Vickrey derives equilibrium bidding strategies in a first-price auction when values are drawn from the uniform distribution and observes that expected revenues in the first- and second-price auctions are the same. Vickrey (1962) later recognized that this equivalence holds more generally, that is, for arbitrary distributions. Maskin and Riley (2000) conclude that the revenue-equivalence theorem holds under four main assumptions: (i) risk neutrality, (ii) independence of different buyers' private signals about the item's value, (iii) lack of collusion among buyers, and (iv) symmetry of buyers' beliefs. A number of papers have explored the implications of relaxing these assumptions to obtain a theoretical ranking of auction formats. These papers reveal inconsistent results, some favoring the first-price and others the second-price auction; Ausubel and Cramton (2002) suggest that a theoretical ranking of auction formats may be impossible in general. Holt (1980), Riley and Samuelson (1981), and Maskin and Riley (1984) show that the first-price auction generates higher revenue for the seller when bidders are risk averse. Milgrom and Weber (1982) relax the assumption of independence of bidders' signals about the item's value; they show that the seller favors the second-price auction if the bidders' signals are affiliated (technically, pair-wise positively correlated). Moreover, they point out that it is better for a seller to reveal his private information. Bikhchandani and Huang (1989) consider an environment with a resale market and show that the affiliation assumption is the key to determining the revenue ranking; the existence of resale motivates bidders to bid aggressively because of the signaling incentive. They also note that it is still better for a seller to reveal his private information to bidders, even in an environment with a resale market. Maskin and Riley (2000) consider an environment in which bidders are asymmetric and conclude that there is no definite revenue ranking; different assumptions about the nature of the heterogeneity lead the expected revenue in each mechanism to be higher or lower than in the other. The line of research on signaling is also related to our work: Jullien and Mariotti (2002) as well as Cai, Reiley and Ye (2003) have studied signaling by an informed seller with non-verifiable information.
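As a concrete benchmark for the revenue equivalence that the paper departs from, consider the textbook symmetric case with n risk-neutral bidders whose values are i.i.d. uniform on [0, 1]; this is the standard Vickrey example, not the informed-seller model analyzed in this paper.

```latex
% First-price auction: the equilibrium bid shades the value; expected revenue is
% the expectation of the highest shaded bid.
b^{FP}(v) = \frac{n-1}{n}\,v, \qquad
E\!\left[R^{FP}\right] = \frac{n-1}{n}\,E\!\left[v_{(1)}\right]
 = \frac{n-1}{n}\cdot\frac{n}{n+1} = \frac{n-1}{n+1}.

% Second-price auction: bidding one's value is a dominant strategy, so revenue is
% the expected second-highest value, which is the same number, illustrating equivalence.
b^{SP}(v) = v, \qquad
E\!\left[R^{SP}\right] = E\!\left[v_{(2)}\right] = \frac{n-1}{n+1}.
```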

 

The Role and the Ambit of Corporate Governance and Risk Control Frames

Dr. Marco Taliento, University of Foggia, Italy

 

ABSTRACT

The following brief notes deal with the two principal corporate governance frames/models (considered here as core norms, key concepts, and regulative principles for business administration, management, and control), as observed in the major countries: (i) the Anglo-Saxon/American system, which is "market based" and "shareholder-oriented", and (ii) the Latin-German-Japanese paradigm, which is "credit based" and "stakeholder-oriented". The specific attempt of this study is to highlight the best international theories and practices in modern corporate governance processes, drawing attention to their role and ambit and, on the other hand, to the need for suitable risk control mechanisms. In particular, it is possible to observe the new role played by the risk variable in business: previously considered within the internal control scheme, it has become a tool of a wider management philosophy (enterprise risk governance). A "sound" corporate governance system is a fundamental premise for the achievement of general objectives such as (1) economic-financial returns, (2) the going-concern condition (the firm's survival), and (3) growth. The critical choice of such a system is not plausible without considering the peculiarities of a given company and, on the other hand, the overall economic, political, and social variables characterizing the "macro-environment". In general terms, the international business literature identifies two main corporate governance styles: the Anglo-Saxon/American paradigm, which is market based and shareholder-oriented, and the Latin-German-Japanese paradigm, which is credit based and stakeholder-oriented. It is not unlikely that a company would achieve different levels of results and performance if different governance rules were adopted: the reason lies in the capability of governance determinants to increase corporate wealth, as estimated even through equity market dynamics (consider, for example, G-index or Tobin's Q studies and findings, as well as Basel II enforcement or Standard & Poor's corporate governance scoring). The recent practice of calculating a "score" for corporate governance models conceals aspects of great significance and produces a deep impact on modern companies' decisions, since the judgments of shareholders, investors, and other stakeholders are increasingly based on extra-accounting issues. Corporate governance valuation concerns, in synthesis: the board of directors, auditing, charter/by-laws, anti-takeover provisions, ownership, executive and director compensation, progressive practices, and education (1). The above-mentioned aspects should be embedded in effective management processes inspired by reliable risk control. Indeed, managing corporate and business "risk" nowadays seems to be the principal key to achieving lasting success for firms.
In this perspective, it is necessary to: contextualize the corporate governance valuation within the appraisal of the country-system in which enterprises compete or cooperate; support a strong business culture that dedicates greater attention to risk profiles; identify reliable measures and indicators (of ownership structure, external influence, shareholder rights, stakeholder interests, transparency, disclosure, board functions, auditing, etc.); emphasize the role of external advice, in order to avoid illusory opinions (self-perceptions of "very strong", "strong", "moderate", "weak", or "very weak" governance); establish processes to enhance the efficiency of companies and groups; and reduce the "rating risk" due to incorrect scores, which might mislead actual and potential investors (and thus affect future performance). Corporate governance systems have various effects on the way corporations are directed, organised, and controlled. Today, responding to the dimension and complexity of markets, corporations need adequate governance rules in order to tackle the challenges of the new scenarios, where the boost of globalisation meets local traditions and culture. The separation between capital ownership and firm control is typical of modern financial systems and represents the basis of modern corporate governance theories. In order to draw attention to the so-called Agency Theory, also known as Principal/Agent Theory, which is still the main point of reference and synthesis elaborated in economic-managerial theory, it is useful to recall some doctrinal arguments. As early as 1776, in The Wealth of Nations, Adam Smith warned of the possible dangers connected to the diffusion of stock companies, caused on one side by the owners' lack of will and aptitude to manage and control their enterprises, and on the other by the lack of incentives that could push managers to act with the greatest efficiency and effectiveness. In detail, Smith affirms that shareholders usually behave very negligently, being satisfied with the dividend that managers think it right to distribute; besides, it is not possible to expect from managers the same diligence that they would use if they were administering their own money. It is in the twentieth century that a fertile discussion around the rules and institutions of corporate governance begins. In 1932 Berle and Means published the fundamental "The Modern Corporation and Private Property". They showed the evolution from the traditional capitalistic enterprise, in which ownership and managerial powers belong to the same subjects, to the managerial or mature enterprise (the public company), characterised by the severance of property (ownership) from capital administration and disposition (control). The studies of Berle and Means identify a new powerful intangible resource, the "managerial capacity" provided by managers. As a matter of fact, Berle and Means highlighted that control over productive assets had passed gradually to the small groups of people who lead the entire firm, presumably, but not necessarily (a risk for equity), toward the interests of its owners. In the economic analysis made by Schumpeter in the 1940s, the separation between control and ownership appears to be the main cause of the extinction of the entrepreneur.

 

Application of the Grey Prediction Theory Compared with Other Statistical Methods on the Suitability of Short-term Forecast: Outbound Visitors from Taiwan

Dr. Ching-Yaw Chen, Shu-Te University, Taiwan

Dr. Pao-Tung Hsu, Shu-Te University, Taiwan

Chi-Hao Lo, Shu-Te University, Taiwan

Yu-Je Lee, Takming College, Taiwan

Che-Tsung Tung, Takming College, Taiwan

 

ABSTRACT

The Grey Prediction Theory is aimed at modeling systems under conditions of uncertainty and incomplete information, and it needs a minimum of only four data points to forecast. It can obtain good predictive results in short-term forecasts. Therefore, we use the Grey Prediction Model to predict the number of outbound visitors. The data on outbound visitors from Taiwan are obtained from the Taiwan Tourism Bureau, and the forecasts are compared for accuracy and error rates with those of other forecast models. These results can hopefully provide subsequent researchers with references on related topics, and provide related agencies, local governments, and planning departments in private enterprises with references to Taiwan’s future tourism demand for policy setting and for tourism market sales and management strategies. Prediction primarily involves finding possible patterns among existing historical data and using objective, scientific methods and the functional relations of variables established from numerical models to forecast uncertain trends. The emphasis is on selecting the most appropriate forecasting techniques for different forecasting targets, variable relationships, forecasting horizons, and organizational goals. However, facing future uncertainties and incomplete information, forecasted values will likely differ from actual values. In short-term forecasting, in order to effectively reflect market trends, forecasting precision is of the utmost importance, because accurate forecasts are a technique and management method that can effectively assist decision-makers in making appropriate judgments and support the reliability of decision-making (Bernstein, 1984; Lewis, 1982). Furthermore, cost considerations require us to scrutinize the costs of gathering information and the time involved. These are all topics that need to be addressed. In current research on tourism demand forecasting, modeled predictions are based primarily on quantified mathematical models built with single- and multi-variable explanations. For example, Chu (1998) combined seasonal and non-seasonal ARIMA models and sine-wave nonlinear regression forecast models to project the number of visiting international tourists. Generally, single-variable time-series analysis quantifies and assesses the developmental trends of events that occurred in different time periods within existing historical data; the time data are further divided into trend, cycle, and seasonality components for investigation. Multi-variable analyses seek to find the effects of potentially independent variables on the forecasted variables, and the factors of influence are selected through Grey correlation analysis, AHP, Fuzzy AHP, etc., to identify the most influential factor and refine the forecasts. Traditional forecasting methods, e.g., the regression model, require large volumes of historical data as a model foundation and have to satisfy related statistical assumptions. The Grey System Theory is a method that performs systematic model building for uncertain situations with incomplete information, without requiring large volumes of statistical data or long time series to investigate and understand the given system (Deng, 2003). Comparatively, the Grey System Theory is simpler and more convenient. The Grey System Theory was first proposed by Deng in 1982.
The theoretical foundation performs correlation analyses or model construction on what it views as an uncertain system with incomplete information, and investigates and understands the system through prediction, evaluation, and decision. Today, this theory is widely applied in fields such as finance, engineering control, and commerce. Many studies have confirmed that the Grey forecast is most appropriate for forecasting under short-term and stable trends (Li, 2002; Chang, Lai and Yu, 2005). The literature on tourism demand forecasting shows that many research topics revolve around assessing the accuracy of different forecasting models for tourism demand, revising models, correcting errors, and combining different methods into mixed-model forecasting (Archer, 1980, 1987; Uysal and John, 1985; Witt, 1995; Wong, 1997; Cho, 2003; Chao, 2004; Chang, Lai, and Yu, 2005). Therefore, this study attempts to investigate the effectiveness of using the Grey approach for short-term forecasting; this is the motivation behind the study. Because Grey forecasting theory can perform predictions with less information, thereby avoiding complex formulas and calculations, it can achieve accurate forecasting results at lower cost. This study therefore employed a GM (1,1) forecasting model to estimate the number of outgoing tourists in recent years, and compared the verified accuracy and margins of error with those of related forecasting methods, in an attempt to identify the most effective model. Grey forecasts are performed by using GM (1,1) (a Grey model of a first-order differential equation with a single variable) as a foundation on existing data. In essence, it is meant to discover the future dynamics of the various elements within a given number series. First, we perform the Accumulated Generating Operation (AGO) on the system information to serve as the internal basis for model construction and to reduce the randomness of the original number series. If the original series is positive, it exhibits an ascending law (the Grey exponential law) after the generating operation. The generated sequence established by the AGO is the foundation for system model construction and forecasting (Deng, 2003; Deng and Guo, 1996). The Grey System Theory therefore stipulates that all systems that can be broadly defined as energy systems are compatible with exponential-law modeling. This study employs the Grey Forecast Model GM (1,1) as the main framework of the research. The research flow chart is illustrated in Figure 1; a forecast is made based on the historical data on the number of outgoing Taiwanese tourists from 1990 to 2004, as provided by the Tourism Bureau of the Republic of China, and the forecast is compared with the actual number of tourists in 2005.
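As a minimal illustration of the GM (1,1) steps just described (AGO, least-squares estimation of the development coefficient and grey input, the time-response function, and the inverse AGO), the sketch below implements the model in Python and scores it with MAPE. The series and the function names are our own illustrative assumptions; this is not the Tourism Bureau data or the authors' implementation.

```python
import numpy as np

def gm11_forecast(x0, horizon=1):
    """GM(1,1) Grey forecast: AGO, least-squares parameter fit, time response, inverse AGO."""
    x0 = np.asarray(x0, dtype=float)
    n = len(x0)
    if n < 4:
        raise ValueError("GM(1,1) needs at least four data points")
    # Accumulated Generating Operation (AGO) reduces the randomness of the raw series
    x1 = np.cumsum(x0)
    # Mean sequence of consecutive AGO values
    z1 = 0.5 * (x1[1:] + x1[:-1])
    # Least-squares estimate of the development coefficient a and grey input b
    B = np.column_stack((-z1, np.ones(n - 1)))
    Y = x0[1:]
    a, b = np.linalg.lstsq(B, Y, rcond=None)[0]
    # Time-response function of the whitened first-order differential equation
    k = np.arange(n + horizon)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    # Inverse AGO recovers fitted and forecast values of the original series
    return np.concatenate(([x1_hat[0]], np.diff(x1_hat)))

def mape(actual, predicted):
    """Mean absolute percentage error, a simple accuracy measure for comparing models."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return np.mean(np.abs((actual - predicted) / actual)) * 100

# Hypothetical outbound-visitor counts (in thousands); illustrative only.
history = [2942, 3213, 3529, 3942, 4235, 4589, 5062, 5189]
fit = gm11_forecast(history, horizon=1)
print("next-period forecast:", round(fit[-1], 1))
print("in-sample MAPE (%):", round(mape(history[1:], fit[1:-1]), 2))
```

The same MAPE measure can then be computed for ARIMA, regression, or other candidate models on an identical series, which is the kind of accuracy and error-rate comparison the study performs.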

 

Productivity Growth, Human Capital and Technical Efficiency

Dr. Yahn-Shir Chen, National Yunlin University of Science and Technology, Taiwan

Chao-Ling Lin, Chang Jung Christian University, Taiwan

 

ABSTRACT

From the perspectives of the structure-conduct-performance (S-C-P) model and resource-based theory, this study employs data envelopment analysis (DEA) and the Malmquist productivity index to estimate the production performance of audit firms in Taiwan. Production performance assessed in this study includes productivity change, technical change, and technical efficiency change. In addition, the effects of human capital embodied in partners on the technical efficiency of audit firms are examined. A balanced panel of data on 45 public accounting firms is obtained from the 1996-2001 Census Report of Public Accounting Firms in Taiwan. Empirical results reveal that, on average, audit firms experienced productivity growth of 27% and technical progress of 31%, but a 5% decline in relative efficiency, during the sample period. We also report a positive relationship between the technical efficiency of the firms and the human capital embodied in partners. The environment in which auditors operate has been drastically reshaped by events in the business world during the first few years of this new century. After Enron, more regulations, covering legal liabilities and the services offered, have been imposed on audit firms. For example, in the United States of America, the passage of the Sarbanes-Oxley Act of 2002 took from the public accounting profession the major portion of its self-regulatory authority (Whittington and Pany, 2004). Moreover, many clients of audit firms are pursuing strategies of globalization and e-commerce for competitive advantage in the changing market. The public accounting profession is obliged to invest a large amount of resources to upgrade its audit technology and to expand its service scope. It is therefore foreseeable that the operating costs of audit firms will rise significantly to meet the new requirements. Under the new economic landscape, how can audit firms survive and grow? One cost-effective and feasible way is to improve their production performance by hiring professionals with more advanced education and more experience, and by enhancing their employees’ accumulation of intellectual capital. Hence, firstly, this study aims to address the behavior of audit firms responding to the changing environment during our sample period. Using the panel data of 45 partnership audit firms, we employ data envelopment analysis (DEA) and the Malmquist productivity index to estimate the production performance of audit firms from 1996 to 2001. Specifically, we assess the productivity change, technical change, and technical efficiency change of audit firms over time. In the public accounting profession, the capability of a professional is primarily determined by three elements: pre-employment formal academic education, continuing professional education, and experience accumulated through on-the-job training (Boynton et al., 2001). These three determinants of capability foster the human capital formation of a professional. The professional staff of a typical audit firm includes partners, managers, senior accountants, and staff assistants (Whittington and Pany, 2004). Partners are responsible for ensuring that the audit is performed in accordance with applicable professional standards and for maintaining primary contacts with clients (Arens and Loebbecke, 2002). Partners play a critical role in service provision, and they are the owners and residual claimants of an audit firm.
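For reference, the output-oriented Malmquist index underlying this kind of DEA analysis is conventionally written as the product of the two components named above. The notation below is the standard textbook form, with D^t(·) the output distance function measured against the period-t frontier; it is a sketch of the general decomposition, not necessarily the exact specification estimated in the study.

```latex
M\!\left(x^{t+1},y^{t+1},x^{t},y^{t}\right)
  = \underbrace{\frac{D^{t+1}\!\left(x^{t+1},y^{t+1}\right)}{D^{t}\!\left(x^{t},y^{t}\right)}}_{\text{technical efficiency change}}
    \times
    \underbrace{\left[\frac{D^{t}\!\left(x^{t+1},y^{t+1}\right)}{D^{t+1}\!\left(x^{t+1},y^{t+1}\right)}
    \cdot
    \frac{D^{t}\!\left(x^{t},y^{t}\right)}{D^{t+1}\!\left(x^{t},y^{t}\right)}\right]^{1/2}}_{\text{technical change}}
```

A value of the index above one indicates productivity growth between periods t and t+1, and the overall change is the product of the efficiency-change and technical-change terms, which is how the reported productivity change, technical change, and efficiency change relate to one another.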
From the perspective of resource-based theory, do partners with advanced academic degrees, higher levels of continuing professional education, and richer experience contribute more to the production performance of an audit firm? Hence, secondly, this study aims to investigate the association between the human capital embodied in partners and the production performance of the firm. The remainder of this paper is organized as follows. In section 2, relevant previous research is discussed and hypotheses are developed. We describe the empirical data used in this study, the estimation model, and the variable definitions in section 3. The results of this study appear in section 4. Finally, we conclude with a summary in section 5. According to the S-C-P model, the market structure and environment in which a company operates may affect the behavior and, thus, the performance of the company (Bain, 1959; Shepherd, 1972; Scherer, 1980). In the past decade, many audit clients either closed their businesses or moved to Mainland China or emerging Southeast Asian countries, such as the Philippines or Vietnam, for new opportunities owing to the faltering regional economy in Taiwan. The traditional audit market has been seriously affected, and competition among audit firms has intensified as a result of the shrinkage of auditing practice. Moreover, in 1998, the Fair Trade Commission, Executive Yuan, abolished the long-standing audit fee standard set up by the Taiwan Certified Public Accountants Association. This further deteriorated the operating environment for the public accounting profession. Under the depressed economic conditions, audit firms may not be able to increase their revenue or reduce their costs. Instead, audit firms employ professionals with higher education levels and more experience, enhance professional training, or take other strategies to make use of resources for their survival and growth. In other words, audit firms maximize output for a given level of labor and capital input, or minimize labor and capital input for a given level of output, in order to improve their productivity gradually. Therefore, based on the S-C-P model, we expect that audit firms will take actions to ensure effective utilization of resources and, thus, annual improvement of production performance under the competitive environment. In this study, improvement of production performance is defined as productivity change during the sample period. As the S-C-P model suggests, the performance of a company depends on the environment in which the company operates and the competitive advantages that the company possesses. For example, Peteraf (1993) documents a positive relationship between competitive advantages and firm performance. The S-C-P model, however, does not identify the sources of competitive advantages. Prior studies, such as Penrose (1959), Prahalad and Hamel (1979), Wernerfelt (1984) and Dierickx and Cool (1989), propose the resource-based theory to fill the gap left by the S-C-P model. According to resource-based theory, the competitive advantage of a company is derived from its core resources; in turn, the performance of the company depends on those core resources.
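The notion just described, maximizing output for a given level of labor and capital input, is what DEA turns into a technical-efficiency score. The sketch below is a minimal output-oriented, constant-returns-to-scale DEA (the standard CCR form) solved as a linear program; the input/output data are hypothetical and the formulation is a generic illustration, not the authors' exact model or dataset.

```python
import numpy as np
from scipy.optimize import linprog

def dea_output_efficiency(X, Y):
    """Output-oriented, constant-returns-to-scale (CCR) DEA efficiency scores.
    X: (n_firms, n_inputs) inputs; Y: (n_firms, n_outputs) outputs.
    Returns scores in (0, 1]; a score of 1 places the firm on the estimated frontier."""
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    n, m = X.shape
    s = Y.shape[1]
    scores = np.empty(n)
    for o in range(n):
        # Decision variables: [phi, lambda_1, ..., lambda_n]; maximize the output expansion phi.
        c = np.concatenate(([-1.0], np.zeros(n)))
        # Input constraints: the reference combination uses no more input than firm o.
        A_in = np.hstack((np.zeros((m, 1)), X.T))
        b_in = X[o]
        # Output constraints: the reference combination produces at least phi times firm o's output.
        A_out = np.hstack((Y[o].reshape(-1, 1), -Y.T))
        b_out = np.zeros(s)
        res = linprog(c,
                      A_ub=np.vstack((A_in, A_out)),
                      b_ub=np.concatenate((b_in, b_out)),
                      bounds=[(1.0, None)] + [(0.0, None)] * n,
                      method="highs")
        scores[o] = 1.0 / res.x[0]  # efficiency = 1 / maximal feasible output expansion
    return scores

# Hypothetical firm-year data: inputs = (labor hours, capital), output = (total revenue).
X = [[100, 30], [120, 35], [90, 40], [150, 50]]
Y = [[500], [520], [480], [600]]
print(np.round(dea_output_efficiency(X, Y), 3))
```

Running such a model against the frontiers of adjacent years yields the distance functions that enter the Malmquist decomposition shown earlier.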
Barney (1991, p.105) suggests that a core resource must have four attributes: (a) it must be valuable, in the sense that it exploits opportunities and/or neutralizes threats in a firm’s environment; (b) it must be rare among a firm’s current and potential competition; (c) it must be imperfectly imitable; and (d) there cannot be strategically equivalent substitutes for this resource that are valuable but neither rare nor imperfectly imitable. In an audit firm, partners are responsible for maintaining primary contacts with current clients and for building relationships with potential clients and, over time, they develop social capital through their client networks. Partners are also responsible for assuring that the audit is performed in accordance with applicable professional standards. Hence, partners are the owners and chief executives of an audit firm. Banker et al. (2003) report that the average marginal revenue product of partners is 9 times that of other professionals and 18 times that of other employees. In addition, Hitt et al. (2001) note that in a particular professional service firm, partners with education from the best institutions and with the most experience represent substantial human capital to the firm. The human capital embodied in the partners, they add, is a professional service firm’s most important resource.
