The Business Review, Cambridge

Vol. 24 * Number 2 * December 2016

The Library of Congress, Washington, DC   *   ISSN 1553-5827

Online Computer Library Center   *   OCLC: 920449522

National Library of Australia * NLA: 55269788

Peer Reviewed Scholarly Journal

Most Trusted.  Most Cited.  Most Read.

All submissions are subject to a double blind review process

 


The primary goal of the journal is to give business-related academicians and professionals from various fields around the globe a single venue in which to publish their research. The Business Review, Cambridge brings together academicians and professionals from all business-related fields to interact with members inside and outside their own disciplines. The journal provides opportunities to publish one's research as well as to view the work of others. All submissions are subject to a double-blind peer review process. The Business Review, Cambridge is a refereed academic journal that publishes scientific research findings in its field under ISSN 1553-5827, issued by the Library of Congress, Washington, DC. No manuscript will be accepted without the required format, and all manuscripts should be professionally proofread before submission; a professional proofreading/editing service such as www.editavenue.com may be used. The journal will meet the quality and integrity requirements of applicable accreditation agencies (AACSB, regional) and journal evaluation organizations to ensure that our publications provide our authors with publication venues recognized by their institutions for academic advancement and academically qualified status.

The Business Review, Cambridge is published twice a year, in December and in the summer. E-mail: jaabc1@aol.com; Website: BRC. Requests for subscriptions, back issues, and changes of address, as well as advertising inquiries, can be made via our e-mail address. Manuscripts and other materials of an editorial nature should be directed to the Journal's e-mail address above. Address advertising inquiries to the Advertising Manager.

Copyright: All rights reserved. No part of the material protected by this copyright notice may be reproduced or utilized in any form or by any means, including photocopying and recording, or by any information storage and retrieval system, without the written permission of JAABC journals. You are hereby notified that any disclosure, copying, distribution, or use of any information (text, pictures, tables, etc.) from this web site or any other linked web pages is strictly prohibited. Request permission / purchase this article: jaabc1@aol.com

 

Copyright 2000-2018. All Rights Reserved

Independent Accountant Opportunity for Wealth Management Reporting on Crowdfunding Engagements

Dr. Michael Ulinski, Pace University, Pleasantville, NY

Dr. Roy J. Girasa, Pace University, Pleasantville, NY

 

ABSTRACT

The researchers examined the statutory provisions of crowdfunding as a source of liquidity for business startups. An opportunity for local and regional CPA firms was noted, as larger CPA firms may not be as agile as smaller firms in handling the reviews needed in crowdfunding engagements. Both clients receiving funding from this new source of capital and intermediaries charged with researching the viability of projects could use specialty firms able to complete due-diligence review requirements in a timely manner. Conclusions were drawn and recommendations made for firms interested in a fast-growing field of wealth management. The financial crisis of 2007-2009, which occurred in the United States and lasted longer globally, led to the highest level of unemployment since the Great Depression of the 1930s and caused a major rethinking in Congress concerning how to deal with the crisis. Legislatively, the Dodd-Frank Act(1) sought to curb abuses within the financial system with its 1,000 pages of multiple titled divisions addressing perceived abuses and causes of the crisis, particularly by banking institutions. As a result, the statute brought a major overhaul of substantive financial sectors. Its provisions included the Volcker Rule,(2) which essentially prohibited banks from engaging in risk-oriented investment activities such as hedge funds; the (unsuccessful) prohibition of "too-big-to-fail" banks; reform of credit rating agencies; the creation of the Financial Stability Oversight Council (FSOC)(3) to regulate financial sectors of the economy that may pose a danger to overall U.S. financial stability; protection of consumers; and other provisions. On the opposite side of the ledger, in part, Congress addressed the need to foster greater employment opportunities and did so by lessening regulatory restrictions on new start-up companies so that numerous investors could add substantial liquidity in relatively small sums to promote them.
In this paper we discuss crowdfunding and, in connection with it, the role of dark pools and venture capital. We are particularly concerned with the perceived abuses and the regulatory environment that seeks to lessen the fraud and other abuses that inevitably accompany diverse financial strategies. Crowdfunding refers to investments, other than by more traditional means of raising capital, by a substantial number of persons in particular, mostly new, projects. In past years such funding most often came from venture capitalists who assumed substantial risks in the hope of attaining more substantial financial rewards from innovative ideas that appeared to have financial merit. Although venture capital continues to be an important source of capital for new business ventures, crowdfunding has now overtaken it as a major source of financing. Statistically, crowdfunding rose from $6.1 billion in 2013 to $16.2 billion in 2014, with a projected $34.3 billion in 2015; venture capital investments constituted approximately $30 billion over the comparable time frame.(4) Crowdfunding, albeit a newly legally recognized method of raising capital for entrepreneurs, actually has roots several centuries past, but a notable early modern use occurred in 1997, when a British rock band, desiring to mount a reunion tour, requested and received funds online from fans. This led to the formation of ArtistShare which, according to its website, is a platform connecting creative artists to fans who play a role in the creative process and fund creative artistic activities.(5) It was the first fan-funding platform.(6) Crowdfunding constitutes an investment of capital made in order to seek a profit through the efforts of other persons and thus comes within the parameters of the SEC v. W.J. Howey Co. test, subjecting it, unless exempted, to the registration requirements of the Securities and Exchange Commission (SEC).(7)

 

Full text

 

Improving Quality Using Plackett-Burman Screening Designs

Dr. John E. Knight, University of Tennessee at Martin, Martin, Tennessee

 

ABSTRACT

The improvement of product quality can be achieved more effectively using a sequential methodology that includes experimental design, as suggested by the six-sigma philosophy (Pande, Neuman, and Cavanagh, 2000; Breyfogle, 2003; Chowdhury, 2001; Lucas, 2002). Other well-known systematic improvement methodologies, such as those developed by Qual Pro Consulting of Knoxville and by Joseph Juran (Goetsch, 2014), have different numbers of steps but are equally effective. The final goal of these methodologies builds toward finding breakthrough improvements using designed experiments that identify and optimize statistically significant factors influencing product quality, in light of the many potential ideas available to investigate. Plackett-Burman designs (Tyssedal, 2008) are multivariate fractional factorial arrays that strive to identify statistically significant main effects while hinting at possible interactions. The designs also offer the advantage of great reductions in the sample sizes needed to identify significant factors. These multifactor designed experiments provide far greater analytical ability than traditional one-factor-at-a-time testing. This paper demonstrates the usefulness of multifactor design principles as compared to one-factor-at-a-time testing. The approach is illustrated by the successful application of the principles in a case example from a carbon electrode manufacturing environment. The introduction of systematic quality improvement methodologies such as six-sigma greatly enhanced the logic and organization of statistical improvements in quality. Although many of the individual steps for improvement were previously known, a logical sequence of steps that maximized the probability of improvement added new analytical potential.
The sequential steps focus on defining key variables with operational definitions, using repeatability and reproducibility techniques to develop measurement accuracy and precision, achieving statistical control, determining process capability, and testing for significant improvement effects using multivariate statistical experiments. The incorporation of multivariate statistical testing greatly adds to problem-solving ability. Historically, many industrial experiments were simple one-factor-at-a-time tests (called OFAT testing) that relied on the principle of simple cause and effect. The concept was to stabilize the process (get the process into statistical control) and then vary a single experimental factor. The effect of that factor would then be judged by viewing the control chart and calculating the numerical effect of the changed factor (on either the mean or the standard deviation). Although this methodology is simple to understand and calculate, it is far less effective than testing multiple factors simultaneously. Many deficiencies exist in one-factor-at-a-time (OFAT) testing. First, a major assumption is that all of the many other factors are "constant," as suggested by the control chart. Seldom does this condition actually exist in complex processes. Although the control chart may in fact be in statistical control, the inherent variation in the process as calculated by the control chart is the compilation of the variation in all of the factors. Therefore, the inherent standard deviation of the process being calculated is large, given the myriad of factors potentially varying at any one time. Further, the one factor being tested is not robustly subjected to the other forces at play in the system and thus is not tested in the context of noise from other variable factors. Another major assumption is that the optimal answer lies along the line of the factor being varied and tested for significance.
In essence, OFAT testing evaluates line values rather than surface contours. Further, since OFAT testing is basically testing differences in statistical means (the control chart mean versus the experimental mean), the sample size needed to detect reasonably significant differences would be in excess of 30 per factor. If seven different factors were to be tested independently using OFAT, then over 210 samples would be needed, and there would still be no testing of interaction effects even though many samples had been taken. Finally, OFAT limits the probability of finding a significant effect in a limited testing period, since each test evaluates only a single factor. If 10 factors were to be tested, each would need to be tested sequentially rather than simultaneously, increasing both the total testing time and the total sample size needed.
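The sample-size contrast above can be made concrete with a small sketch of our own (not taken from the paper's carbon-electrode case study): the classic 8-run Plackett-Burman design screens up to seven two-level factors, whereas independent OFAT tests at roughly 30 samples per factor would need on the order of 7 × 30 = 210 runs.

```python
# Construct the 8-run Plackett-Burman design for up to 7 two-level factors.
# The first 7 rows are cyclic shifts of a generator row; the 8th row is all -1.
GENERATOR = [+1, +1, +1, -1, +1, -1, -1]

def plackett_burman_8():
    rows = []
    for shift in range(7):
        # cyclic shift of the generator row
        rows.append([GENERATOR[(j - shift) % 7] for j in range(7)])
    rows.append([-1] * 7)  # final run: every factor at its low level
    return rows

design = plackett_burman_8()

# Balance: each factor is run 4 times high and 4 times low.
for col in range(7):
    assert sum(row[col] for row in design) == 0

# Orthogonality: any two columns are uncorrelated, so each main effect
# can be estimated independently of the other six.
for a in range(7):
    for b in range(a + 1, 7):
        assert sum(row[a] * row[b] for row in design) == 0

print(f"{len(design)} runs screen 7 factors; OFAT would need about 7 x 30 = 210.")
```

Because the columns are mutually orthogonal, averaging the responses at a factor's high runs minus its low runs isolates that factor's main effect even though all seven factors change simultaneously from run to run.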

 

Full text

 

Building Trust and Agreement in Negotiations

Dr. David A Robinson, RMIT Asia Graduate School, Vietnam

Dr. Kleanthes Yannakou, RMIT Graduate School of Business and Law, Australia

 

ABSTRACT

This article expands the theme of 'meta-modelling' to embrace an aspect of negotiation theory that never seems to date. To skilfully craft solutions that not only give negotiating parties a short-term 'win' but also build a foundation for long-term mutual benefit must surely be the quintessential prize sought by organizations and governments engaged in diplomatic relations and negotiations. But why is it so seldom achievable in one-on-one negotiations between individuals or small groups, whether in business, community, or personal relationships? This question has been pondered by many and remains one of the most important aspects of leadership and management. This paper seeks to answer it by integrating negotiation styles theory and traditional wisdom about how to negotiate with allies and adversaries within the values journey meta-model. It examines how ultimately collaborative (win-win) solutions can be brought to fruition when trust and agreement are forged in equal measure within a context of high-level shared values, represented by the third paradigm of the values journey model. Negotiation was addressed as one of the themes in the meta-model series (Robinson, Morgan and Nguyen, 2016), and negotiation styles have previously been re-positioned within a values framework (Robinson and Nguyen, 2016), thereby providing a framework by which to predict an individual's negotiating position in an effort to pre-empt their propensity and ability to seek collaborative outcomes. It was concluded that individuals living higher-level values will be best placed to win in any negotiation. That being the case, it presents the axiom that when both parties enter into negotiation from a high-values base there is a high propensity for both to win. A propensity for collaboration within a stable long-term business relationship has been termed alliance capability (Anand and Khanna, 2000) and has been associated with strategic competitive advantage (Ireland, Hitt and Vaidayanth, 2002).
This paper further integrates traditional leadership and management wisdom surrounding negotiation strategies, with particular regard to allies, adversaries, bedfellows, fence-sitters, and opponents (Block, 1987). The primary aim is to conceptualise how Block's stakeholder categories and corresponding negotiation strategies relate to the negotiation styles proposed by Robinson and Nguyen (2016). A secondary aim is to expand the scope of the values journey meta-model by illustrating how Block's model is aligned with it. Previous work by Robinson and Nguyen (2016) combined negotiation and personal values, indicating how each of five negotiating styles is congruent with particular steps in the values journey. Two main implications emanated from their work in this field: firstly, if the value station can be discerned, the negotiation style and preferred outcome can be pre-empted; secondly (and conversely), when a person's negotiation style is known, their values can also be discerned. Based on the illustrative congruence between values and negotiating styles depicted in Figure 1, it follows that collaborative-style negotiations correspond to integrative-synergistic values, known as high-level values.

 

Full text

 

The Impact on Firm Value of LIFO Adoptions Revisited

Dr. John R. Wingender, Jr., Creighton University, Omaha, Nebraska

Dr. Thomas A. Shimerda, Creighton University, Omaha, Nebraska

Dr. Thomas J. Purcell, Creighton University, Omaha, Nebraska

 

ABSTRACT

In this paper we examine the impact of the corporate decision to switch GAAP inventory valuation to the LIFO (Last In, First Out) method. Research from 30 to 40 years ago finds significant positive abnormal returns from the adoption of LIFO. However, economic conditions then, with the high inflation rates of the 1970s, were very different from those of the 21st century. We replicate these studies with data starting in 2000. In our sample we find a significant positive impact on firm value from LIFO adoptions, which is surprising given the low-inflation environment of this sample. Traditional work on the impact on firm value of managerial decisions to change GAAP postulates that accounting changes do not change firms' cash flows and thus should have no impact on firm value. As the Literature Review section recounts, nearly all tests of accounting changes using event methodology indicate no statistically significant change in firm value as measured by the average abnormal return on the event date of the change in accounting method. The exception to the rule has been switches from the FIFO (First In, First Out) method to the LIFO (Last In, First Out) method. There are several reasons for this finding. The main reason is that switching to LIFO in high-inflation times causes inventory costing to increase immediately, with no change in actual cash outflow or in the cash value of inventory. An increase in accounting expenses leads to lower earnings before taxes. This leads to lower taxes, hence a lower cash outflow, which results in higher after-tax cash flow today. Thus there is a direct impact on cash flow without any change in overall risk, which should lead to increased firm value today. Although the accounting changes wash out over time, the impact on the time value of money from receiving cash sooner rather than later is significantly positive.
A conceptual case can be made for the use of LIFO in some inventory settings, such as when the nature of the inventory assets acquired, stored, and used results in a physical flow best characterized by the last items in being the first transferred to customers. For example, businesses that deal in unrefined ore would more than likely add new purchases to the top of the pile and also take from the top for use in their operations. In the United States, however, LIFO has been adopted primarily not on conceptual grounds but for its tax-deferral advantages. Since 1939 the Internal Revenue Code has allowed taxpayers to use the LIFO method to calculate taxable income, with the requirement that the adopting taxpayer also use LIFO in reports to shareholders and other users (the so-called LIFO conformity rule). Taxpayers may adopt LIFO without requesting advance permission from the IRS, but once it is adopted, advance permission is required to discontinue LIFO. LIFO matches current costs of inventory against revenues generated from sales of that inventory. As a result, balance-sheet inventory amounts generally are lower, especially during periods of rising prices for replacement goods. If business price cycles fluctuate, LIFO will tend to smooth the impacts and decrease the likelihood that unrealized holding gains and losses in beginning-of-year inventory items will be recognized. An unavoidable consequence of adopting LIFO for tax advantages is that reported income will generally be lower than if FIFO had been used. International Financial Reporting Standards (IFRS) do not allow the use of LIFO. LIFO provides benefits during periods of rising prices. Price levels of inventory components in the U.S. economy have not risen significantly in recent periods, suggesting that LIFO adoptions should be waning. However, as the data below indicate, taxpayers are still adopting LIFO.
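The cash-flow mechanics described above can be illustrated with a stylized example of our own (hypothetical purchase layers, selling price, and a flat 21% tax rate are our assumptions, not data from the paper):

```python
# Stylized LIFO vs. FIFO comparison under rising purchase prices.
# All figures are hypothetical; a flat 21% tax rate is assumed.
purchases = [(100, 10.00), (100, 11.00), (100, 12.00)]  # (units, unit cost), oldest first
units_sold = 150
revenue = units_sold * 20.00
tax_rate = 0.21

def cogs(layers, qty):
    """Cost of goods sold, consuming inventory layers in the order given."""
    total = 0.0
    for units, cost in layers:
        take = min(units, qty)
        total += take * cost
        qty -= take
        if qty == 0:
            break
    return total

fifo_cogs = cogs(purchases, units_sold)                   # oldest (cheapest) costs first
lifo_cogs = cogs(list(reversed(purchases)), units_sold)   # newest (dearest) costs first

fifo_tax = (revenue - fifo_cogs) * tax_rate
lifo_tax = (revenue - lifo_cogs) * tax_rate
deferred = fifo_tax - lifo_tax   # cash the firm keeps today by using LIFO
```

With these numbers LIFO expenses $200 more of inventory cost ($1,750 versus $1,550), deferring $42 of tax (21% of $200). The expense difference reverses when the cheaper layers are eventually consumed, but the firm holds the cash now, which is the time-value-of-money effect the abstract describes.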

 

Full text

 

Evaluation of Questionnaire for Transfer Pricing Issue of SMEs in Europe

Dr. Veronika Solilova, Mendel University, Brno, Czech Republic

Dr. Danuse Nerudova, Assoc. Prof., Mendel University, Brno, Czech Republic

 

ABSTRACT

Although SMEs represent more than 99% of enterprises in the non-financial business sector in the EU and contribute significantly to national and global economic growth, they face many obstacles, resulting in higher compliance costs of taxation and lower participation in international markets. Our research focused on the transfer pricing of SMEs and its compliance costs, which are among the obstacles SMEs face. The current approach to transfer pricing for SMEs and its related costs were evaluated based on the results of a questionnaire administered in Europe. Based on the results, we conclude that SMEs would appreciate the introduction of specific transfer pricing measures that would decrease their compliance costs. Their costs for managing general transfer pricing requirements were estimated at up to EUR 2,000 per year, and in the case of documentation at up to EUR 6,000 per year. The European Commission (2003) defines small and medium-sized enterprises (hereinafter SMEs) according to the number of employees, turnover, or balance-sheet total, as enterprises which employ fewer than 250 employees and have an annual turnover of less than EUR 50 million and/or a balance-sheet total of less than EUR 43 million. The European Commission (2015) states that SMEs represented 99.9% (i.e., 22.3 million) of all enterprises in the non-financial business sector in 2014. Although SMEs contribute significantly to national and global economic growth (i.e., 28% of GDP in the EU28), they face many obstacles, such as an increased level of regulation, reduced availability of skilled staff, 27 different tax and accounting systems, and others.
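The Commission's thresholds quoted above reduce to a short rule. A sketch (function and variable names are ours), reading the headcount ceiling as mandatory and the "and/or" as allowing either financial ceiling to satisfy the test:

```python
def is_sme(employees: int, turnover_m_eur: float, balance_sheet_m_eur: float) -> bool:
    """European Commission (2003) SME test as quoted in the text:
    fewer than 250 employees, AND annual turnover under EUR 50 million
    and/or balance-sheet total under EUR 43 million."""
    if employees >= 250:          # headcount ceiling is binding
        return False
    # an enterprise may exceed one of the two financial ceilings and still qualify
    return turnover_m_eur < 50 or balance_sheet_m_eur < 43

assert is_sme(120, 30, 20)        # under all ceilings
assert is_sme(200, 60, 40)        # turnover too high, but balance sheet qualifies
assert not is_sme(200, 60, 45)    # both financial ceilings exceeded
assert not is_sme(300, 10, 10)    # headcount alone disqualifies
```

The middle cases show why the "and/or" matters: a firm can outgrow one financial ceiling without losing SME status, as long as headcount stays below 250.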
Even though many tax and administrative requirements may appear relatively "neutral" for businesses of all sizes, these requirements impose higher fixed costs on SMEs, which, unlike large enterprises, do not possess enough human and financial capital to cope with them. Regarding the international aspects of SMEs, the European Commission (2010) states that only 5% of SMEs have subsidiaries abroad, compared with 44% of SMEs that perform international activities within the EU, such as exporting, importing, investing abroad, cooperating internationally, or having international subcontractor relationships. This situation reflects the fact that international activities and having a subsidiary abroad are associated with international taxation issues, transfer pricing, problems with cross-border loss compensation, and higher financial costs and business risks. Therefore, many governments introduce measures, mainly in the tax area, such as tax preferences, special provisions, specific tax rules, and simplification measures for SMEs, to reduce these negative impacts. The aim of the paper is to evaluate the current approach to the transfer pricing issues of SMEs and their compliance costs based on the results of a questionnaire administered to European enterprises. The paper presents the results of research in the project GA CR No. 15-24867S "Small and medium size enterprises in global competition: Development of specific transfer pricing methodology reflecting their specificities". Generally, international transfer pricing is subject to strict tax regulations, which entail high compliance costs of taxation.
In the EU, transfer pricing compliance means adherence to the arm's-length principle in line with Art. 9 of the OECD Model Tax Convention and with the OECD Transfer Pricing Guidelines for Multinational Enterprises and Tax Administrations (hereinafter TP Guidelines), which provide guidance on applying the arm's-length principle to pricing for tax purposes in cross-border transactions between associated enterprises. However, as Solilova and Nerudova (2016) mention, the TP Guidelines make no direct distinction between types or sizes of MNEs; i.e., all enterprises, regardless of their size, are subject to the same principles and recommendations.

 

Full text

 

Teaching Economics, In-class versus Online Effectiveness

Dr. Doina Vlad, Seton Hill University, Greensburg, PA

 

ABSTRACT

This research paper looks into the advantages and disadvantages of switching from traditional in-class teaching of economics to online teaching. The research data come from student evaluations and surveys. Some advantages of the online delivery format noted by students are: time saved by not having to travel to and from school, especially during the wintertime and for night classes; the availability of recordings, which students can replay as many times as needed until they feel confident they have mastered the material; the chance to learn more about technology and new software, which are transferable skills for the modern workplace; and increased student self-confidence and ability to work independently in an online environment. For future research I want to include student assessment measures and compare the learning achieved in regular face-to-face classes to the results achieved by students in online courses. Let's take a walk on one of the big university campuses and look around; what we'll probably see are buildings, parks, a student center, a sports arena, and many other buildings and places meant to make students feel comfortable and "live the true life of a student." That happened to me as well while in graduate school. I remember that one of my "take a break from studying" routines on a cold day was to "get lost" in the Student Center lounge, many times with a cup of coffee in front of a TV, watching something that wasn't really interesting but relaxed me; or, on a sunny day, walking around the lake and sitting on the benches looking at the water, which relaxed me as well. Fast-forward 15 years: how do students relax and interact, and what do they expect from the "college experience" today?
Firstly, cell-phone and computer-based technology provides choices for them: the daily time spent on Facebook, Twitter, and many other virtual activities results in less time spent on real physical interaction among students. Secondly, the economic environment is tougher; with college costs increasing every year, many students cannot afford to be students only, so they must work while being full-time students, and they have to do it all in the same 24 hours in which we used to be students only. In this type of environment, it is no wonder that the expensive buildings and facilities universities spent so much money on are not used the way they were intended. So, what is the future of higher education? No one really knows, although we can speculate. Part of the speculation is the feeling that everything in the higher-education environment moves faster now, due mostly to newer technological changes. When you open up the world and allow information to flow freely or at very low cost, the question of the value of traditional education arises naturally. Add to that the pressure of the high costs associated with earning a degree. At this point, you have to consider the ideas floating around on how to change existing models to make learning and earning a degree more convenient and more affordable. What is most impressive, however, is the pace of change: from "Massive Open Online Courses" (MOOCs) to competency-based education, blended courses, and flipped classrooms. All of these choices "test the waters" for a new model of academic teaching, driven by the feeling that higher education is in dire need of change. A demographic decline is an approaching reality that has been affecting student population sizes during the last few years, and that decline will continue for many more years. In this environment, universities have to fight hard for student enrollments.
Some of them, especially the small schools, have to become very creative to be able to keep their doors open. There is a growing body of literature on online-education learning outcomes and student learning satisfaction. Wiechowski and Washburn (2014) examined course satisfaction scores and student learning outcomes for more than 3,000 course evaluations from 171 courses during the 2010 and 2011 academic years.

 

Full text

 

Monitoring and Accelerating Structural Change via Exports: A Capability Based Approach for Turkey

Dr. Hayrettin Kaplan, Marmara University, Istanbul

 

ABSTRACT

Development is the shifting of resources from low-productivity activities to high-productivity ones, so development should be understood as a dynamic, endless process. The process should be responsive to the capabilities a country has developed. In this regard, we try to determine the activities a developing country should focus on when its already-developed capabilities are taken into account. We monitor the development of export performance and the structural change Turkey experienced between 1995 and 2013. We evaluate the existing industry structure and determine the potential sectors that are more productive and compatible with Turkey's capability stock. These sectors are proposed as potential accelerators of the ongoing structural change. Development is a process of structural change towards sectors with higher productivity. Since sectors differ in their productive capacity and demand elasticity, moving towards more efficient sectors increases overall productivity in the economy (Prebisch, 1950; Kuznets, 1966; Paus, 2012). During the process of structural change, developing countries first tend to shift resources from agriculture to industry, in the sense of Lewis (1954), by importing foreign technology and capital to increase productivity. As the country develops, increasing productivity via imported capital and technology tends to reach its limits in conjunction with the diminishing supply of inactive labor in the agricultural sector (Eichengreen et al., 2011). But since development in the sense of structural change towards more productive sectors is an endless process, countries should focus on and shift resources towards more productive sectors within industry (Hausmann, Hwang and Rodrik, 2005; McMillan and Rodrik, 2011; Rodrik, 2011).
This raises two issues: (i) which sectors would increase the country's productivity most, and (ii) does the country possess enough capabilities to produce in those sectors efficiently? In other words, to continue the structural transformation process, a country should shift its resources towards more productive sectors in which it can produce efficiently. While the first issue concerns the relative position of sectors in terms of productivity, the second concerns the country's capability for efficient production. These two issues are discussed for Turkey via the Product Space literature, in the context of capability development. Hausmann and Rodrik (2003) point out that although detecting the sectors with the potential to gain comparative advantage in a country is a difficult process, the state can make a better assessment than firms. Lin (2010, 2013) emphasizes detection of such sectors as a responsibility of the state and suggests a selection method (Lin and Treichel, 2011). After the selection of sectors, the state should implement sector-specific policies, because the required structural transformation cannot be achieved via Washington Consensus policies (McMillan and Rodrik, 2011; Lin, 2013). As Gomory and Baumol (2000, p. 5) point out, there is no single economic path that serves the best interest of the country. The economic outcome will differ according to the existing capabilities and the choices of the country's economic administration. In other words, the transformation of the country's output composition will differ according to the choices made among the sectors with the potential for efficient production. The literature on industrialization policy deals with the issue of how much to deviate from current comparative advantage. The debate between Justin Lin and Ha-Joon Chang sheds light on the differing views about industrial policy implementation in developing countries.
The two distinct views have common ground regarding South Korea's achievements with sector-specific industrialization policies. Lin and Chang (2009, p. 496) emphasize that South Korea's movement "along the 'ladder' of international division of labour has often been carried out in small, if rapid, steps". Thus, to "take small and rapid steps", a country should decide how much to deviate from its current Revealed Comparative Advantage (RCA).  The capability-based approach focuses more on the learning process and policy-coordination issues and prioritizes the qualitative side, and in that respect it differs from growth discussions, in which the quantitative side is mostly considered (Ju et al., 2009, p. 26; Paus, 2012, p. 116).
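The RCA referred to above is conventionally computed as the Balassa index. A minimal sketch, with purely hypothetical export figures for illustration:

```python
# Balassa Revealed Comparative Advantage (RCA) index:
# RCA = (X_cs / X_c) / (W_s / W), where X_cs is country c's exports in
# sector s, X_c its total exports, W_s world exports in sector s, and
# W total world exports. RCA > 1 signals revealed comparative advantage.

def rca(country_sector_exports, country_total, world_sector_exports, world_total):
    """Balassa index for one country-sector pair."""
    return (country_sector_exports / country_total) / (world_sector_exports / world_total)

# Hypothetical illustration: a country exporting 30 of its 100 export units
# in textiles, against a world textile share of 10/100, has an RCA of 3.0.
value = rca(30, 100, 10, 100)
```

A sector's distance from RCA > 1, computed this way across sectors, is the kind of input the deviation question in the text turns on.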

 

Full text

 

An Iterated Variable Neighborhood Search Algorithm for a Single-Machine Scheduling Problem with Periodic Maintenance and Sequence-Dependent Setup Times

Dr. Chun-Lung Chen, Takming University of Science and Technology, Taiwan (R.O.C.)

 

ABSTRACT

We consider scheduling problems on a single machine with periodic maintenance and sequence-dependent setup times.  The objective is to minimize the total weighted tardiness.  The problem considered in the paper is NP-hard in the strong sense.  Finding an optimal solution requires considerable computation time; therefore, heuristics are an accepted practice for finding good solutions.  In this paper, an iterated variable neighborhood search algorithm is proposed to solve the problem.  To evaluate the performance of the proposed algorithm, several algorithms are examined on a set of 320 instances.  The results show that the proposed algorithm performs effectively.  In this research, an iterated variable neighborhood search algorithm is proposed to solve the problem of single-machine scheduling with periodic maintenance and sequence-dependent setup times, with the objective of minimizing the total weighted tardiness. For convenience, we refer to the proposed algorithm as IVNS.  The single-machine scheduling problem does not necessarily involve a single machine; issues in a complicated machine environment, such as a single bottleneck (Gagne et al., 2002; Liao & Juan, 2007) or other complex scheduling issues, can also be reduced to single-machine scheduling; for instance, a group of machines may be treated as a single machine (Al-Turki et al., 2001; Ying et al., 2009). To simplify scheduling problems, past researchers assumed that all machines were available at all times, but this is not the case in real situations.  Unavailability is due to causes that halt the machine; for example, routine maintenance or repair limits the availability of the machine.
In addition, companies nowadays emphasize problem prevention and maintenance, so machines are usually scheduled for periodic maintenance to make sure they will not fail and thus cause greater loss of production capacity. It is therefore necessary to consider machine availability in scheduling problems, and some research now includes machine availability in the scheduling problem.  For example, Jabbarizadeh, Zandieh, and Talebi (2009) included machine availability in flexible flow-line scheduling problems, and proposed dispatching rules, the Johnson rule, a genetic algorithm, and simulated annealing to minimize the makespan. Pacheco, Ángel-Bello, and Álvarez (2013) proposed a multi-start tabu search algorithm to solve the same problem.  In this research, an iterated variable neighborhood search (IVNS) algorithm is proposed to solve the considered problem with the aim of minimizing the total weighted tardiness.  The proposed IVNS algorithm can be regarded as a variant of VNS and can be classified as a neighborhood-based local search algorithm.  Mladenović and Hansen (1997) first developed VNS, a relatively new neighborhood-based local search heuristic.  The heuristic searches the solution space using a set of predefined neighborhood structures and escapes from local optima by systematically changing the neighborhood structures.  In recent years, several production scheduling problems have been efficiently solved with VNS approaches.  VNS algorithms, or variants of VNS, for single-machine scheduling include the following: Gupta and Smith (2006) use a VNS algorithm for single-machine total tardiness scheduling with sequence-dependent setups; Lin and Ying (2008) propose a hybrid Tabu-VNS metaheuristic approach for single-machine tardiness problems with sequence-dependent setup times.
Kirlik and Oguz (2012) also consider the same single-machine scheduling problem and present a VNS to solve it.  Liao and Cheng (2007) propose a VNS for minimizing single-machine weighted earliness and tardiness with a common due date.
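The VNS core described above (search predefined neighborhoods, restart at the first neighborhood on improvement) can be sketched as follows. This is not the paper's IVNS: the instance data, the two neighborhoods, and the iteration budget are illustrative assumptions, and the maintenance periods and the iterated perturbation layer are omitted for brevity.

```python
import random

def twt(seq, p, d, w, s):
    """Total weighted tardiness of a job sequence.
    p: processing times, d: due dates, w: weights,
    s[i][j]: setup time when job j follows job i (initial setup assumed zero)."""
    t, total, prev = 0, 0, None
    for j in seq:
        t += (s[prev][j] if prev is not None else 0) + p[j]
        total += w[j] * max(0, t - d[j])
        prev = j
    return total

def swap_neighbor(seq):
    # Exchange two randomly chosen positions
    a, b = random.sample(range(len(seq)), 2)
    s2 = seq[:]
    s2[a], s2[b] = s2[b], s2[a]
    return s2

def insert_neighbor(seq):
    # Remove one job and reinsert it at another position
    a, b = random.sample(range(len(seq)), 2)
    s2 = seq[:]
    s2.insert(b, s2.pop(a))
    return s2

def vns(p, d, w, s, iters=2000, seed=0):
    random.seed(seed)
    best = list(range(len(p)))
    best_cost = twt(best, p, d, w, s)
    neighborhoods = [swap_neighbor, insert_neighbor]
    k = 0
    for _ in range(iters):
        cand = neighborhoods[k](best)
        cost = twt(cand, p, d, w, s)
        if cost < best_cost:
            best, best_cost, k = cand, cost, 0   # improvement: return to first neighborhood
        else:
            k = (k + 1) % len(neighborhoods)     # systematically change the neighborhood
    return best, best_cost

# Tiny hypothetical instance: 3 jobs, unit sequence-dependent setups
p, d, w = [3, 1, 2], [2, 1, 6], [1, 2, 1]
s = [[1] * 3 for _ in range(3)]
seq, cost = vns(p, d, w, s)
```

On this instance the search improves on the initial sequence (0, 1, 2) by moving the heavily weighted, early-due job forward.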

 

Full text

 

Analyzing Financial Time Series Using Monte Carlo Bayesian Approach

Dr. Jae J. Lee, State University of New York, New Paltz, NY

 

 

ABSTRACT

This paper explains how to analyze financial time series data using Bayesian inference with a Markov Chain Monte Carlo (MCMC) algorithm. Many business and economic time series are parsimoniously modeled by the Autoregressive Integrated Moving Average (ARIMA) model. Bayesian inference provides a systematic way to incorporate a researcher's prior knowledge in the analysis of data and a sequential way to update the analysis given new data. Rather than the repeated-sampling paradigm, its paradigm is to treat the unknown entities as a random vector and to derive a posterior probability density for that vector. Summary of the random vector is usually based on random draws from the posterior probability density. The MCMC algorithm generates random draws from a posterior probability density that does not have an analytical form from which random draws are easily obtained. In this paper, several ARIMA models are fitted using simulated data. The prior density and posterior density of the parameters of each ARIMA model are obtained by Bayesian inference.  A random-walk Metropolis-Hastings algorithm is used to generate random draws from the posterior density, and these draws are used to summarize characteristics of the ARIMA parameters. Some convergence diagnostics of the MCMC approach are discussed.  A business or economic time series is stationary if its joint distribution is not affected by a change of time origin. If a time series shows a stationary pattern, an autoregressive (AR), moving average (MA), or mixed (ARMA) model is very useful for modeling the stochastic structure that generates the series. However, many business and economic time series do not show a stationary pattern. A particular nonstationary pattern is homogeneous nonstationarity, in which the series is homogeneous except in level and/or slope. Such behavior can be modeled using an autoregressive integrated moving average (ARIMA) model.
ARIMA is a stochastic model for which the exponentially weighted moving-average forecast yields minimum mean square error (Box et al., 1994). Homogeneous nonstationarity is removed by taking differences of the time series data. Bayesian inference is conditional on prior knowledge about unknown entities and on observed data. It provides a systematic way to incorporate a researcher's prior knowledge in the analysis of data. Once new data are observed, it provides a sequential way to update prior beliefs and add the new information. It also deals naturally with conditioning on, and marginalizing over, any nuisance variables, and augmenting nuisance variables speeds up computations.  In addition, Bayesian inference accounts for both parameter uncertainty and model uncertainty using the Bayes factor for each model entertained. In the time series context, it provides the predictive distribution of data required for forecasting. The main framework of Bayesian inference is to treat the unknown entities as a random vector and to derive a posterior probability density for that vector given any source of prior information and the observed data. Inferential summaries of the unknown entities are usually based on random draws from the posterior probability density.  Often, drawing directly from the posterior density is not feasible, since the posterior probability density is not one from which a set of random draws can be generated directly.
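The random-walk Metropolis-Hastings idea above can be sketched for the simplest ARIMA-family case, an AR(1) coefficient under a flat prior (so the log-posterior is the conditional log-likelihood). This is a minimal illustration, not the paper's models: the simulated data, step size, chain length, and burn-in are all assumptions chosen for the sketch.

```python
import math, random

def log_post(phi, y):
    # Conditional log-likelihood of AR(1), y_t = phi*y_{t-1} + e_t, e_t ~ N(0, 1);
    # with a flat prior this is the log-posterior up to an additive constant.
    return -0.5 * sum((y[t] - phi * y[t - 1]) ** 2 for t in range(1, len(y)))

def rw_metropolis(y, n_draws=5000, step=0.1, seed=1):
    random.seed(seed)
    phi, lp = 0.0, log_post(0.0, y)
    draws = []
    for _ in range(n_draws):
        prop = phi + random.gauss(0, step)            # symmetric random-walk proposal
        lp_prop = log_post(prop, y)
        if math.log(random.random()) < lp_prop - lp:  # accept with prob min(1, ratio)
            phi, lp = prop, lp_prop
        draws.append(phi)                             # rejected moves repeat the old state
    return draws

# Simulate AR(1) data with true phi = 0.6, then summarize the chain
random.seed(0)
y = [0.0]
for _ in range(300):
    y.append(0.6 * y[-1] + random.gauss(0, 1))
draws = rw_metropolis(y)
posterior_mean = sum(draws[1000:]) / len(draws[1000:])  # discard burn-in
```

The posterior mean of the retained draws recovers the true coefficient to within sampling error, which is the kind of summary the abstract describes.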

 

Full text

 

Development of Marketing Capabilities Along the Life Cycle of the Firm

Katharina Buttenberg, University of Latvia

 

ABSTRACT

Marketing capabilities have gained considerable interest in the resource-based theory literature in the last decade. Customer- and brand-oriented marketing capabilities have been identified as key capabilities for business performance; therefore, these capabilities have to be acquired and developed at a very early stage of the firm. The purpose of this paper is to identify the specific challenges firms face in the development of capabilities, specifically marketing capabilities, during their life cycle. The approach is a literature review: the author draws on the resource-based theory literature on marketing capabilities and on life cycle theory. Key findings are that young firms must deliberately establish marketing capabilities to be successful in terms of business performance, and later they must develop these capabilities further. Since capability development in young firms is very often an unstructured process, the practical implication of this paper is that a structured process for the development of marketing capabilities should be established to ensure successful future development. This is a theoretical paper; it includes the findings of the literature analysis of resource-based theory on marketing capabilities in connection with business performance and of life cycle theory on capability development, as well as suggestions for future steps in empirical research. Resource-based theory (RBT) holds that a firm can gain competitive advantage by acquiring a unique set of resources (Barney, 1991). Amit and Schoemaker evolved the concept of resources by introducing capabilities, which are firm-specific processes developed over time (Amit & Schoemaker, 1993, p. 35). To develop these capabilities and benefit from their full potential, firms and their managers must carefully pick, manage, monitor, and sometimes shed them (Sirmon & Hitt, 2003, pp.
344–348). During the company life cycle, different capabilities need to be developed to create sustainable competitive advantage (Helfat & Peteraf, 2003, p. 1000). Especially in the first ten years, when capabilities need to be grouped and assigned and objectives need to be set, the acquisition and development of the main capabilities is crucial (e.g. Miller & Friesen, 1984, pp. 1162–1163). These capabilities also include marketing capabilities, which have grown in interest in the resource-based theory literature (Kozlenkova, Samaha, & Palmatier, 2014, p. 1). In the last century, the role of marketing has changed from a transaction-based formative discipline to a brand-based approach (Vargo & Lusch, 2004, pp. 2–8). This new role includes the relationship between the inside-out (brand-oriented) view and the outside-in (consumer-oriented) view. Since marketing capabilities hold a central position in the firm and are central to its performance, they are important to develop in the early stages of the firm, but also in later development (e.g. Kozlenkova et al., 2014, pp. 2–4). Therefore, the development of marketing capabilities during the life cycle of the firm warrants closer investigation.  As mentioned above, there has been a paradigm shift in marketing: the previous sole focus on the customer has shifted to a focus on the brand as the center of marketing (Urde, Baumgarth, & Merrilees, 2013, p. 14). However, customer orientation is crucial for the development of a profitable enterprise (Deshpandé, Farley, & Webster Jr., 1993, p. 27). Firms thus face the challenge of incorporating and integrating the customer-oriented and brand-oriented views to provide strong, sustainable economic value, which is typically half of the market capitalization of a firm (Kotler, 2009, p. 446). Therefore, even young organizations need to develop marketing capabilities that enable them to support both views.
"The ultimate goal of the marketing function within a firm can be defined as increasing the value of the market-based assets of the firm" (Shervani, 2010, p. 1). To fully benefit from brands, it is important to understand the sources and effects of market-based assets, as well as their change over time (Kotler, 2009, p. 446).

 

Full text

 

How Idol Admiration Affects Audience's Willingness to Watch Broadcasts of Japanese Professional Baseball Games: A Case Study of Taiwanese Baseball Players in Japan

Dr. Yu-Chih Lo, National Chin-Yi University of Technology, Taiwan

Dr. Tu-Kuang Ho, Taiwan Hospitality & Tourism University, Taiwan

 

ABSTRACT

Professional baseball has been very popular in Taiwan. As more Taiwanese baseball players are scouted and signed by overseas professional baseball organizations, overseas leagues with Taiwanese players have attracted larger audiences in Taiwan. The study explored Taiwanese baseball audiences' attitudes toward broadcast Nippon Professional Baseball (NPB) games, subjective norms, perceived behavioral control, and idol admiration, and these factors' effects on behavioral intention. The researchers utilized purposive sampling and administered 310 questionnaires in total. After filtering out 10 invalid questionnaires, the study retained 300 valid questionnaires, yielding a 96.8 percent response rate. For data analysis, the researchers first processed demographic variables with descriptive statistics in SPSS 20.0, followed by multivariate analysis and model-rationality validation, with measurement and structural models analyzed in AMOS 20.0. The results show that audiences' attitudes toward broadcast NPB games, perceived behavioral control, and idol admiration have a significant influence on behavioral intention, whereas subjective norms show no significant impact. Based on these findings, the researchers make recommendations for future studies on idol admiration and spectator behavior in sports.  In recent years, sports activities have become increasingly professionalized. Famous baseball players from Taiwan have been recognized and valued by baseball teams in Japan. At present, in the current regular season of Japanese professional baseball (Nippon Professional Baseball, NPB), a total of seven Taiwanese baseball players are on the rosters of various teams. Broadcasting companies in Taiwan have also purchased broadcast rights from these teams to broadcast NPB games.
The professional baseball league in Taiwan and the various professional baseball teams must consider how they can use mass media (such as television and Internet broadcasting) to enhance idol admiration, increase their profits, and attract more sports fans to watch this professional sport. Against this background, this paper studied the audiences of professional baseball games in Taiwan and Japan and applied Ajzen's (1985; 1991) Theory of Planned Behavior to explore how sports fans' idolization of popular baseball players influences their intention to watch professional baseball games. It is hoped that the findings can serve governmental agencies, baseball leagues, and baseball teams as a reference in future decision-making, helping them draft policies on professional sports and choose marketing strategies.  The Theory of Planned Behavior (TPB) (Ajzen, 1985; 1991) was developed from the Theory of Reasoned Action (TRA) (Fishbein & Ajzen, 1975). TRA suggests that a person's behavior is influenced by rational considerations. Building on the expectancy-value model proposed previously by the researchers, TRA holds that a person's behavior is determined by his or her intention to perform that behavior. TRA can reflect a person's intention and expectancy to perform a specific behavior, and it can also be used to predict whether the person will in fact perform it. Fishbein and Ajzen (1975) also point out two factors in the formation of behavioral intention: the individual's attitude towards the behavior and the subjective norms formed under social pressure.

 

Full text

 

 Entrepreneurship, Innovation and Organic Growth within Vertical Software Firms

James Simak, Jacksonville University, Florida

Steven T. Kelley, Jacksonville University, Florida

Dr. Vikas Agrawal, Jacksonville University, Florida

 

INTRODUCTION

Growth in competitive industries is often pursued through mergers, acquisitions, and consolidations, frequently with less-than-desirable lasting results. However, a larger balance sheet or increased revenues are initially certain, giving organizations confidence that growth objectives will be met.  In contrast, organic growth is abstract and uncertain, historically pursued by developing strategic competitive advantage through superior marketing efforts that refine or redefine product, place, price, and promotion to gain market share.  As an alternative, a growing number of theories and models have been developed around the importance of risk-taking through entrepreneurship and innovation as the principal method of achieving long-term, sustainable growth.  This study investigates factors identified by senior managers as contributing to entrepreneurship and innovation within diversified, established vertical software firms and tests hypotheses relating such factors to the growth and success of the firm.  Further, this study attempts to determine whether sustained organic growth of the firm must include innovation and entrepreneurship as fundamental competencies.  Interviews were conducted with senior leaders responsible for overall business-unit results, including sales and marketing, operations, product development, and competitive strategy in niche vertical software markets.  Confirmatory empirical data and research findings are presented that test hypotheses about the underlying relationships of key factors as drivers of, or barriers to, innovation and related organic growth within the firm.  Drucker (1954) declared that business has only two basic functions: marketing and innovation.  This perspective of innovation as a critical business function has endured more than sixty years and suggests that entrepreneurship and innovation provide competitive advantage for growing and sustaining a business (Crossan & Apaydin, 2010).
Porter (1996) further argues that organizations must innovate to be competitive over the long term. Technology-oriented firms, including vertical software suppliers, compete in rapidly changing environments where competitive forces require product enhancement and new product development to maintain market share and sustain growth.  The problem explored by this research centers on growth of the firm achieved organically through entrepreneurship and innovation, forgoing reliance on mergers and acquisitions for the development of new products, territories, and clients.  The latent variables of innovation and entrepreneurship have been the subject of extensive research covering a wide range of disciplines, including economic, psychological, social, cultural, and organizational perspectives.  This exploratory research is limited to innovation and entrepreneurship related to expected economic benefits within established, ongoing business enterprises, specifically established software firms. Schumpeter defined innovation simply as "doing things differently" and stressed the importance of novelty in products and processes within the firm (Tzeng, 2009).  Burgelman (1983) defined "internal corporate venturing" as the creation of new businesses within an established firm.  Damanpour (1987) categorized innovation as radical, incremental, product, process, administrative, or technical.

 

Full text

 

 Effect of Deferred Tax Reporting – Case of Publicly Traded Companies in Czech Republic

Dr. Hana Bohusova, Mendel University in Brno, Czech Republic

Dr. Patrik Svoboda, Mendel University in Brno, Czech Republic

 

ABSTRACT

The reporting of deferred tax is an instrument for regulating distributable profit or loss in the form of an accrual or a deferral. Research on deferred tax in European companies is very limited; the majority of studies on this issue concern firms incorporated in the USA and cover a period beginning in 1994. The contribution to current research is that this study concerns non-US companies reporting according to IFRS. The structure of the deferred tax category of publicly traded joint-stock companies in the Czech Republic and its impact on financial-analysis ratios are the subjects of the research. According to information from the Prague Stock Exchange (2016), a total of 24 publicly traded companies traded their stocks on the Prague Stock Exchange in the researched period. The financial institutions (5) were excluded from the research, and an additional 5 companies were excluded due to incomplete information. The research builds on the results of the authors' previous research, and the processed data were obtained from the companies' annual reports.  The materiality of the deferred tax category within our sample was examined, and details on the most significant components of temporary differences are presented. The relation between deferred tax expense and total corporate income tax expense in the period, and the relations between deferred tax changes and EBIT and EAT, were tested.  According to CreditRiskMonitor (2016), there are 73,458 parent entities traded on regulated capital markets worldwide. They cover $49 trillion of revenue, representing 70% of world GDP. Given the importance of the financial information provided to external users (mainly investors and providers of financial resources), it is necessary to present such information in a fair view. To meet these requirements, reporting in accordance with a generally accepted financial reporting system, US GAAP or IFRS, is necessary.
Regulation (EC) No. 1606/2002 in the EU requires publicly traded companies governed by the law of a Member State, under certain conditions, to prepare their consolidated accounts in conformity with International Financial Reporting Standards for each financial year starting on or after 1 January 2005.  These companies represent less than 1% of the total number of companies operating in the Internal Market. Despite this, they represent 33.5% of jobs in business entities and, according to EC (2013), contribute to the indicator Value Added at Factor Costs. It is quite obvious that listed companies represent a significant share of the corporate tax bases contributing corporate income tax to the state budget. On the other hand, publicly traded companies represent a significant investment opportunity. True and fair information on financial position and performance is demanded by both current and potential investors. This information is provided by financial statements; it is therefore necessary to take into account the relationship between financial reporting and income tax rules, which differ across countries. This means that the gross profit or loss reported to users of financial statements can differ from the corporate tax base due to different rules in individual countries. To measure the relation between corporate taxation rules and accounting rules, it is necessary to investigate their objectives.

 

Full text

 

Mitigating Risk from Railcar Bearing Failures: A Predictive Model for Identifying Failures

Dr. Vikas Agrawal, Jacksonville University, FL

Kimberly Bynum, Jacksonville University, FL

John Jinkner, Jacksonville University, FL

Frank Lombardo, Jacksonville University, FL

 

ABSTRACT

Previous research on accident rates for trains has shown that, when trains are traveling above 25 miles per hour, the main cause of accidents is equipment failure, which to a high degree includes bearing failure. Using data collected from acoustic wayside defect detectors along railroad tracks, statistical analyses were conducted to build a model that predicts the probability of bearing failure. This information may be useful for detecting defective bearings before a failure, and for creating maintenance schedules using predicted failure rates to maximize railroad safety and minimize maintenance costs.  Railroad companies use wayside detectors and automated analyzers to identify railcars and associated equipment whose operating parameters warrant repair or replacement. Three United States railroads (CSX, Union Pacific, and Norfolk Southern) have partnered to develop the Joint Wayside Diagnostic System (JWDS). Although each railroad operates its own portion of the JWDS, all data are fed into a single database that is available for information exchange among the railroads. Equipment failure and/or car downtime are expensive for railroad companies. The real-time condition monitoring and reporting provided by JWDS mitigates downtime and accidents, and therefore costs. The system identifies and prioritizes rail car conditions, allowing inspectors to move from finders to fixers by proactively flagging real-time readings rather than waiting until an equipment failure or derailment occurs. Data mining the JWDS database allows trends and patterns to be discovered early, which may reduce equipment downtime and, in extreme cases, may even save lives.  This paper explores a dataset from the JWDS database and creates a logistic regression model to predict deteriorating railcar equipment.
Twelve months of data were collected remotely from an acoustic wayside defect detector that had recorded, analyzed, and categorized railcar types and various bearing noises. Using logistic regression in SAS Enterprise Miner resulted in an empirical model that uses relationships between specific car types and noise components (type and level) to predict the deterioration of bearings.  In the early days of railroading, overheated journals were a major safety issue. Journal boxes (bearings without rolling elements) contained lubrication, which often overheated, resulting in a condition referred to as a "hotbox." A hotbox could result in a burned-off bearing, which would ultimately lead to a train derailment. Back then, crews at the rear of the train were vigilant in looking for the smoke and smell associated with hotboxes. Modern railroad operations no longer use plain-bearing cars, but instead use the successor rolling-element bearings, which can still be prone to occasional overheating. Likewise, hot wheels, often caused by sticking brakes, also remain a safety concern (McGonical, 2006).  In an effort to mitigate equipment failure due to overheated wheel bearings, wayside defect detectors, first developed in the 1960s, were employed by railroads to monitor railcars.
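The logistic-regression idea described above can be sketched in miniature. The paper fits its model in SAS Enterprise Miner over JWDS data; the single acoustic-noise feature, the readings, and the failure labels below are hypothetical stand-ins used only to show the mechanics.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(xs, ys, lr=0.5, epochs=4000):
    """Fit P(failure) = sigmoid(b0 + b1*x) by batch gradient descent."""
    b0, b1 = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            err = sigmoid(b0 + b1 * x) - y   # prediction error for one reading
            g0 += err
            g1 += err * x
        b0 -= lr * g0 / n                    # gradient step on intercept
        b1 -= lr * g1 / n                    # gradient step on noise coefficient
    return b0, b1

# Hypothetical data: higher acoustic noise level -> more observed failures
noise  = [0.2, 0.4, 0.5, 0.9, 1.1, 1.3, 1.6, 1.8]
failed = [0,   0,   0,   0,   1,   1,   1,   1]
b0, b1 = fit_logistic(noise, failed)
p_quiet = sigmoid(b0 + b1 * 0.3)   # predicted failure probability at a low reading
p_loud  = sigmoid(b0 + b1 * 1.7)   # predicted failure probability at a high reading
```

The fitted coefficient on noise is positive, so the model assigns a higher failure probability to louder readings, which is the relationship the abstract describes.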

 

Full text

 

Market Reactions to the PricewaterhouseCoopers Merger

Chiawen Liu, National Taiwan University, Taiwan

Taychang Wang, National Taiwan University, Taiwan

Wan-Ting (Alexandra) Wu, University of Massachusetts Boston, MA

 

ABSTRACT

This paper examines the market reactions to the merger of Coopers & Lybrand (CL) and Price Waterhouse (PW) in 1997. The results show that, when the merger plan was announced, there were no significant abnormal returns for CL clients, PW clients, or clients of both accounting firms. Further analyses show that the market reactions to the merger plan do not differ between firms with varying monitoring demand. Although the monitoring hypothesis is rejected, we find evidence consistent with the insurance hypothesis: financially distressed clients have more positive abnormal returns around the announcement date than financially healthy clients. These results imply that investors in a financially distressed client expect more benefits from the merger of its accounting firm, which enhances auditors' insurance role against corporate failure.  Mergers and acquisitions have been a corporate strategy to expand market share or improve company performance, and the accounting profession is no exception. In 1989, Ernst & Young was formed by the merger of Ernst & Whinney and Arthur Young. In the same year, Deloitte, Haskins & Sells and Touche Ross merged to become Deloitte & Touche. In this merger wave, the Big 8 shrank to the Big 6. On September 18, 1997, Coopers & Lybrand (CL) and Price Waterhouse (PW), at the time the fifth- and sixth-largest accounting firms in the U.S., announced plans to form the world's largest accounting firm, with combined annual fees of $11.8 billion worldwide in 1996 and about 135,000 employees across the globe. The completion of this merger created PricewaterhouseCoopers (PwC) on July 1, 1998 and further reduced the Big 6 accounting firms to the Big 5. The purpose of this paper is to study the market reactions to the announcement of the merger plan of CL and PW. More importantly, we examine how the market reaction ties to clients' monitoring and insurance demands.
Studies of audit clients' stock-price reactions have focused on negative events at accounting firms and, more often than not, find negative effects on clients' stock prices. For example, Chaney and Philipich (2002) and Krishnamurthy et al. (2002) investigate the impact of Andersen's audit failure at Enron on Andersen's non-Enron clients. Menon and Williams (1994) and Baber et al. (1995) examine the effect of the Laventhol & Horwath bankruptcy on its clients. Franz et al. (1998) study the impact of litigation against audit firms on the firms' non-litigating clients. In contrast to these studies, our paper examines the market reaction to the merger of two Big 6 accounting firms, which is normally considered a positive event. Since investors react asymmetrically to good news and bad news (McQueen et al. 1996), it is not clear whether we can simply invert prior studies' findings on negative events for a positive event.  We rely on the monitoring hypothesis and the insurance hypothesis (Wallace 1980) to predict the market responses to the announcement of the CL and PW merger plan. Under the monitoring hypothesis, if audit quality increases after the merger, as usually claimed by the merging firms, clients should receive more effective monitoring from auditors. Thus, auditees' stock prices will respond positively to the merger announcement if stockholders expect future monitoring to be enhanced and raise their valuation of the auditees accordingly. Under the insurance hypothesis, the merger increases the accounting firm's funds available to settle litigation over audit failures. Since a stock price is the present value of expected future cash flows, more indemnity secured from auditors for an audit failure implies a higher stock price.
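The abnormal-return logic behind such event studies can be sketched with the standard market model: estimate alpha and beta over an estimation window by OLS, then compute abnormal returns in the event window as the gap between actual and model-predicted returns. The return series below are hypothetical; the paper's actual estimation windows and test statistics are not reproduced here.

```python
def ols_alpha_beta(r, rm):
    """OLS of stock returns r on market returns rm: r_t = alpha + beta*rm_t + e_t."""
    n = len(r)
    mr, mm = sum(r) / n, sum(rm) / n
    cov = sum((a - mr) * (b - mm) for a, b in zip(r, rm))
    var = sum((b - mm) ** 2 for b in rm)
    beta = cov / var
    alpha = mr - beta * mm
    return alpha, beta

def abnormal_returns(event_r, event_rm, alpha, beta):
    # AR_t = R_t - (alpha + beta * Rm_t): actual minus model-predicted return
    return [a - (alpha + beta * b) for a, b in zip(event_r, event_rm)]

# Hypothetical estimation-window returns (stock and market)
est_r  = [0.010, -0.004, 0.006, 0.002, -0.008, 0.012]
est_rm = [0.008, -0.002, 0.005, 0.001, -0.006, 0.009]
alpha, beta = ols_alpha_beta(est_r, est_rm)

# Two-day event window: abnormal returns and their cumulative sum (CAR)
ar = abnormal_returns([0.020, 0.005], [0.004, 0.001], alpha, beta)
car = sum(ar)
```

A significantly positive CAR around the announcement date is what the insurance hypothesis predicts for financially distressed clients.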

 

Full text

 

Copyright: All rights reserved. No part of the material protected by this copyright notice may be reproduced or utilized in any form or by any means, including photocopying and recording, or by any information storage and retrieval system, without the written permission of JAABC journals.  You are hereby notified that any disclosure, copying, distribution or use of any information (text; pictures; tables. etc..) from this web site or any other linked web pages is strictly prohibited. Request permission / Purchase this article:  jaabc1@aol.com

 

Contact us   *  Publication Policy   *   About us 

Copyright 2000-2018. All Rights Reserved