Isabel Ermidal1, Idalete Dias2 and Filipa Pereira2, 1Department of English, Minho University, Braga, Portugal, 2Department of German, Minho University, Braga, Portugal, 3Department of Informatics, Minho University, Braga, Portugal
The complexities of Natural Language Processing have become more challenging in recent years, given the rapid spread of online comment forums where abusive, violent and hate-laden behaviour often smears otherwise democratic and free conversations. This paper aims to contribute to the detection of hate speech in social media. Given the polarity-centeredness of sentiment classification methods and the difficulties facing automatic emotion detection due to the linguistic and paralinguistic properties of user-generated content, not to mention the hardships of context-dependency, we propose a mixed-method approach that combines opinion mining and emotion detection with linguistic input. We applied our model to a subset of the NetLang Corpus, namely texts classified under the prejudice type “Racism” and the sociolinguistic variable “Ethnicity”. Our starting hypothesis was that adversative conjunctions are markers indicative of opinion conflict and emotional discord, two phenomena characteristic of hate speech. First, we narrowed down the search results containing the conjunction “but” using regular expressions and further restricted the search to instances of “but” co-occurring with “not”. Second, sentiment polarity classification followed by emotion classification was carried out using SentiWordNet and the NRC Lexicon, respectively. Finally, the resulting comments underwent a qualitative categorization according to their illocutionary force.
Opinion Mining, Sentiment Analysis, NLP, Hate Speech Detection, Social Media, Online Discourse, Corpus, Adversative Constructions, Illocutionary Acts.
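The corpus-narrowing step described above can be sketched with ordinary regular expressions; the pattern names and example comments below are invented for illustration, not taken from the NetLang pipeline:

```python
import re

# First keep comments containing "but", then restrict to those where "but"
# co-occurs with "not" (or a contracted "n't") in the same comment.
BUT_RE = re.compile(r"\bbut\b", re.IGNORECASE)
NOT_RE = re.compile(r"\bnot\b|n't\b", re.IGNORECASE)

def filter_adversative(comments):
    with_but = [c for c in comments if BUT_RE.search(c)]
    return [c for c in with_but if NOT_RE.search(c)]

comments = [
    "They seem friendly, but they are not like us.",
    "I like this idea.",
    "Nice try but good effort.",
]
matches = filter_adversative(comments)
print(matches)
```

The surviving comments would then be passed to the SentiWordNet and NRC Lexicon stages for polarity and emotion classification.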
Yilan Zhao1 and Yu Sun2, 1Irvine High School, 4321 Walnut Ave, Irvine, CA 92604, 2California State Polytechnic University, Pomona, CA, 91768, Irvine, CA 92620
As adolescent suicide rates grew significantly over the past decade, depression, anxiety, and other mental disorders were largely held responsible for the growth. However, these medical conditions are often overlooked during their early stages, when symptoms are still remediable. A delayed or inattentive response usually results in higher suicide rates or, in lesser cases, mental ailments carried into adulthood. In an attempt to remedy the mental health crisis, countless mental health interventions are being introduced to mitigate the circumstances. In this project, we developed a mobile application that serves as a comprehensive therapy—journaling and group therapy—for those struggling with mild to moderate depressive symptoms. The application utilizes both sentiment-analysis AI and natural language processing in its backend server to generate accurate matches of users who share similar struggles, allowing users to connect and resonate with each other emotionally. The application also provides a private and safe space for users to openly express their thoughts, alleviating their stress through daily journal entries.
Machine learning, Flutter, Adolescent Mental Health, Depression.
Sarthak Agrawal, Saksham Sharma and Surjeet Balhara, Department of Electronics and Communication Engineering, Bharti Vidyapeeth College of Engineering, New Delhi, India
The demand for IoT devices is increasing day by day, and their usage and production have grown accordingly in recent years. With a growing user base, security-related issues have increased as well. While many approaches have been proposed to deal with different security aspects of IoT, one potential solution to such issues is blockchain. Blockchain is a rapidly emerging technology used in various fields; its features of decentralisation and immutability help guarantee security. This paper proposes a blockchain-based security model for protecting IoT devices from various security threats. Finally, the proposed approach and its implementation using blockchain to secure the IoT ecosystem are discussed.
Authentication, Blockchain, Data Protection, IoT, Security.
Geetika Tiwari1, Ruchi Jain2 and Dr Tryambak Hiwarkar3, 1Department of Computer Science, Sardar Patel University, Madhya Pradesh, India, 2Department of Computer Science, LNCT, Madhya Pradesh, India, 3Department of Computer Science, Sardar Patel University, Madhya Pradesh, India
Cloud computing has been promoted as one of the most effective methods of hosting and delivering services via the internet. Despite its broad range of applications, cloud security remains a serious concern. Many secure solutions have been developed to safeguard communication in such environments, the majority of which are based on attack signatures; these systems are often ineffective in detecting all forms of threats. Machine learning approaches have recently been presented, but if the training set lacks sufficient instances of a specific class, the judgment may be incorrect. In this research, we present a novel machine learning firewall mechanism for secure cloud computing environments. The proposed method identifies and classifies incoming traffic packets using a novel combination methodology named "most frequent decision", in which the node's previous decisions are coupled with the machine learning algorithm's current decision to estimate the final attack-category classification. This method improves learning performance as well as system correctness. The publicly available UNSW-NB-15 dataset is utilised to derive our findings, which show an anomaly detection accuracy of 97.68 percent.
Cloud computing, Intrusion Detection System, Machine Learning, UNSW-NB-15.
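A minimal sketch of how a "most frequent decision" combiner might look; the class name, history size, and traffic labels are illustrative assumptions, not the paper's implementation:

```python
from collections import Counter, deque

# The node keeps a short history of its previous classifications and combines
# them with the current model prediction, emitting the most frequent label.
class MostFrequentDecision:
    def __init__(self, history_size=4):
        self.history = deque(maxlen=history_size)

    def decide(self, current_prediction):
        votes = Counter(self.history)
        votes[current_prediction] += 1           # current decision joins the vote
        decision = votes.most_common(1)[0][0]
        self.history.append(current_prediction)
        return decision

node = MostFrequentDecision()
stream = ["normal", "normal", "dos", "normal", "dos"]
decisions = [node.decide(p) for p in stream]
print(decisions)  # isolated, transient predictions are smoothed out
```

The effect is that a single anomalous prediction does not immediately flip the node's final classification, which is one way such vote-combining can improve robustness.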
Fredrik Kamphuis1, Bernardo Magri2, Ricky Lamberty1 and Sebastian Faust3, 1Corporate Research, Robert Bosch GmbH, 2The University of Manchester, 3Technical University of Darmstadt
In public transaction ledgers such as Bitcoin and Ethereum, it is generally assumed that miners do not have any preference on the contents of the transactions they include, so that miners eventually include all transactions they receive. However, Daian et al. (S&P '20) showed that in practice this is not the case, and the so-called miner extractable value can dramatically increase miners' profit by re-ordering, delaying or even suppressing transactions. Consequently, an "unpopular" transaction might never be included in the ledger if miners decide to suppress it, making, e.g., the standard liveness property of transaction ledgers (Garay et al., Eurocrypt '15) impossible to guarantee in this setting. In this work, we formally define the setting where miners of a transaction ledger are dictatorial, i.e., their transaction selection and ordering process is driven by their individual preferences on the transactions' contents. To this end, we integrate dictatorial miners into the transaction ledger model of Garay et al. by replacing honest miners with dictatorial ones. Next, we introduce a new property for a transaction ledger protocol that we call content preference robustness (CPR). This property ensures rational liveness, which guarantees inclusion of transactions even when miners are dictatorial, and it provides rational transaction order preservation, which ensures that no dictatorial miner can improve its utility by altering the order of received candidate transactions. We show that a transaction ledger protocol can achieve CPR if miners cannot obtain a-priori knowledge of the content of the transactions. Finally, we provide a generic compiler based on time-lock puzzles that transforms any robust transaction ledger protocol into a CPR ledger.
blockchain, liveness, censorship, rational adversary, miner extractable value.
Peifang Ni, TCA Laboratory, Institute of Software, Chinese Academy of Sciences, China
A contract called Zero-Knowledge Contingent Payment shows how Bitcoin contracts can provide a solution for the so-called fair exchange problem. Banasik et al. first presented an efficient Zero-Knowledge Contingent Payment protocol for a large class of NP-relations, which is a protocol for selling a witness. It obtains fairness in the following sense: if the seller aborts the protocol without broadcasting the final message, then the buyer eventually gets his payment back. However, we find that the seller in the protocol could refuse to broadcast the final signature of the transaction without any compensation for the buyer. As a result, the buyer cannot extract the witness from the final signature of the transaction and has the payment for the witness locked until finishing a large computation for a secret signing key. In this paper, we fix this problem by augmenting the efficient Zero-Knowledge Contingent Payment protocol. We present a new protocol in which the seller must provide a deposit before the zero-knowledge proof of knowledge of the witness being sold. The buyer then obtains the seller's witness if the seller broadcasts the final signature of the transaction and receives the payment and his deposit; otherwise, the buyer gets back the payment and obtains the seller's deposit. This augmented protocol is constructed without any new assumptions.
fair exchange, Bitcoin, zero-knowledge, without scripts.
Peifang Ni, TCA Laboratory, Institute of Software, Chinese Academy of Sciences
Electronic cash was invented by Chaum in 1982, and many e-cash schemes have since been proposed in order to mimic fiat currency. Bitcoin provides an attractive way to construct a decentralized e-cash system that does not depend on any single party. Ideally, we would like to make the system more practical: for example, users should be able to transfer coins between each other multiple times and to withdraw an arbitrary amount of coins rather than one or a predefined number, so that in the spend protocol a user can spend any amount of valid coins. In this paper, we propose a provably secure and more practical e-cash system. Firstly, it provides anonymous transfer of coins between users, so that a merchant can spend the received coins further; secondly, a user can withdraw an arbitrary amount of coins rather than one or a predefined number; thirdly, during the transfer of coins, the coins have a fixed size; finally, fair exchange between users is also achieved.
E-Cash, transitivity, anonymity, divisibility, preventing double-spending, fair exchange.
Felix Hoffmann, Department of Computer Science and Mathematics, Goethe University, Frankfurt, Germany
Proof-of-Work is a popular blockchain consensus algorithm, used in cryptocurrencies like Bitcoin, in which hashing operations are repeated until the resulting hash has certain properties. This approach consumes large amounts of computational power and energy for the sole purpose of securing the blockchain. In order not to waste energy on hashing operations whose only purpose is enabling consensus between nodes and thereby securing the blockchain, Proof-of-Useful-Work is an alternative approach that aims to replace excessive usage of hash functions with tasks that bring additional real-world benefit, e.g. supporting scientific experiments that rely on computationally heavy simulations. This publication consists of two parts: in the first part, important properties of conventional hash-based PoW are described; in the second part, theoretical PoUW concepts such as Coinami, CoinAI and the cryptocurrency Primecoin are analyzed with respect to how PoW properties can be retained while doing useful work.
Blockchain, Consensus Algorithm, Proof-of-Work, Proof-of-Useful-Work.
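The hash-based PoW described in the first part can be illustrated in a few lines; expressing the difficulty as a number of leading zero hex digits is one common convention, used here for brevity:

```python
import hashlib

# Minimal Proof-of-Work sketch: increment a nonce until the SHA-256 digest of
# the payload plus nonce starts with `difficulty` zero hex digits.
def proof_of_work(data: bytes, difficulty: int) -> int:
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(data + str(nonce).encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

nonce = proof_of_work(b"block payload", difficulty=3)
digest = hashlib.sha256(b"block payload" + str(nonce).encode()).hexdigest()
print(nonce, digest[:8])  # digest begins with three zero hex digits
```

PoUW proposals replace the brute-force loop with useful computation while trying to keep the properties (verifiability, adjustable difficulty, progress-freeness) that this loop provides.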
Ashitosh Joshi1 and Surendra Bhosale2, 1M.Tech student of Department of Electrical Engineering, Veermata Jijabai Technological Institute, Mumbai, India, 2Faculty and Head of Department of Electrical Engineering, Veermata Jijabai Technological Institute, Mumbai, India
This paper proposes a very efficient method for magnetic signature reduction of underwater vessels commonly known as degaussing. Degaussing helps to protect the ferromagnetic vessels from magnetic anomaly detectors and mines and hence ensures stealth mode of operation. We propose a reinforcement learning (RL) based approach for degaussing of the vessel. The proposed algorithm is efficient in terms of computational efforts, speed, and accuracy. The proposed method is validated for a simulated model of prototype submarine as a ferromagnetic vessel. The main advantage of the proposed method is its ability to automatically find the optimal values of currents to be applied for signature reduction.
Degaussing, Magnetic Signatures, Reinforcement Learning, Q Learning.
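A toy Q-learning loop in the spirit of the proposed approach; the discretised current levels, the reward, and the hyperparameters below are illustrative assumptions, not the simulated submarine model:

```python
import random

# The agent adjusts a discretised coil current level; reward is the negative
# residual magnetic signature, which vanishes at the optimal level TARGET.
TARGET, LEVELS, ACTIONS = 6, 10, (-1, 1)
Q = {(s, a): 0.0 for s in range(LEVELS) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.2
random.seed(0)

for _ in range(2000):                            # episodes from random start states
    s = random.randrange(LEVELS)
    for _ in range(20):                          # steps per episode
        if random.random() < eps:                # epsilon-greedy exploration
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: Q[(s, x)])
        s2 = min(max(s + a, 0), LEVELS - 1)      # apply the current adjustment
        reward = -abs(s2 - TARGET)               # negative residual signature
        Q[(s, a)] += alpha * (reward + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

greedy = lambda s: max(ACTIONS, key=lambda a: Q[(s, a)])
print(greedy(2), greedy(9))                      # steps toward TARGET from both sides
```

After training, the greedy policy moves the current toward the signature-minimising level from either side, which mirrors the paper's goal of automatically finding optimal degaussing currents.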
Nicolas Galván-Alvarez1, David Rojas-Casadiego1, David Romo-Bucheli1 and Viatcheslav Kafarov2, 1School of Systems and Computer Engineering, Universidad Industrial de Santander, Bucaramanga, Santander, Colombia, 2School of Chemical Engineering, Universidad Industrial de Santander, Bucaramanga, Santander, Colombia
Biomedical waste generation is severely affected by generalised sanitary emergencies such as epidemics, as shown recently during the COVID-19 pandemic. These sanitary emergencies often induce a plastic use increase in personal protection items, single-use plastics, and other healthcare elements. This increase might surpass the capacity of the waste management mechanism of a specific region, leading to a potential increase in its population health risks. Predicting the trends of biomedical waste generation is not straightforward because it depends on several variables associated with the local health system and the health emergency status. However, a substantial amount of work has been done in epidemics modelling. Our main hypothesis is that biomedical waste generation is strongly associated with sanitary emergencies dynamics. We propose a simulation framework that uses historical data from an ongoing sanitary emergency to build a model that can predict biomedical waste generation trends in urban regions of developing countries.
Biomedical waste, Simulation model, Epidemics, Neural networks, Developing countries, COVID-19.
Lei Miao1, Hongbo Zhang2, and Dingde Jiang3, 1Dept. of Engineering Technology, Middle Tennessee State University, Murfreesboro, TN 37132, USA, 2Dept. of Engineering Technology, Middle Tennessee State University, Murfreesboro, TN 37132, USA, 3School of Astronautics & Aeronautic, University of Electronic Science and Technology of China, Sichuan, China
Wireless secret sharing is crucial to information security in the era of Internet of Things. One method is to utilize the effect of the randomness of the wireless channel in the data link layer to generate the common secret between two legitimate users Alice and Bob. This paper studies this secret sharing mechanism from the perspective of game theory. In particular, we formulate a non-cooperative zero-sum game between the legitimate users and an eavesdropper Eve. In a symmetrical game where Eve has the same probability of successfully receiving a packet from Alice and Bob when the transmission distance is the same, we show that both pure and mixed strategy Nash equilibria exist. In an asymmetric game where Eve has different probabilities of successfully receiving a packet from Alice and Bob, a pure strategy may not exist; in this case, we show how a mixed strategy Nash equilibrium can be found.
secret sharing, wireless communications, game theory, Nash equilibrium.
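For a 2x2 zero-sum game, a mixed-strategy equilibrium can be computed from the standard indifference conditions; the payoff matrix below is matching pennies, purely for illustration, and is not taken from the paper's channel model:

```python
# Mixed-strategy Nash equilibrium of a 2x2 zero-sum game via indifference
# conditions; A holds payoffs to the row player (here the legitimate users),
# with the column player (the eavesdropper) minimising.
def mixed_equilibrium_2x2(A):
    (a, b), (c, d) = A
    denom = a - b - c + d
    if denom == 0:
        return None                     # no interior mixed equilibrium
    p = (d - c) / denom                 # row player's probability on row 0
    q = (d - b) / denom                 # column player's probability on column 0
    value = (a * d - b * c) / denom     # value of the game to the row player
    return p, q, value

equilibrium = mixed_equilibrium_2x2([[1, -1], [-1, 1]])
print(equilibrium)  # matching pennies: (0.5, 0.5, 0.0)
```

In the asymmetric case described in the abstract, the same indifference computation yields the mixed equilibrium when no pure-strategy equilibrium exists.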
Azmat Khan, Michael Carrington, Hafiz Farooq and Abdulrahman Alyahya, Expec Computer Center (ECC), Saudi Aramco, Dhahran, Saudi Arabia
Upstream geological and geoscience application environments typically find their users distributed across regions. Managing these critical applications efficiently, with millions of inbound requests, in an enterprise IP-Fabric data center is an emerging challenge. Therefore, a resilient load balancing architecture is needed to distribute the dynamic workload evenly across the application infrastructure in IP-Fabric and cloud-computing data centers. Commercial load balancers play a key role in implementing an efficient and secure load balancing architecture that ensures availability, high throughput, disaster recovery and resource utilization. In this paper, we propose an "Efficient Load Balancing Architecture" to handle complex upstream oil & gas applications, supported by empirical and statistical evaluations. We also show that the proposed load balancing framework is more reliable and robust than the conventional approach and achieves better performance in a large-scale application environment.
Load Balancing, Upstream Applications, Web Applications, Real-Time, Oil & Gas, Geological and Geo Science Application, Data Center IP-Fabric, Cloud Computing, Application Security.
Aleksander Berezowski, Department of Software Engineering, University of Calgary, Calgary, Canada
This paper covers how I designed a reward function for a miniature race car's reinforcement learning model. The research presented focuses on how to develop a reward function for AWS's DeepRacer competition. Highlights include how different mathematical methods can be used to weigh different reward parameters, how reward function parameters are chosen, a complete breakdown of the code my research led me to write, the performance of the result, and how I would improve it going forward. This work is the culmination of research, testing, and analysis of one approach to the problem; the motivation is that when I started to compete in DeepRacer, there were no papers that broke down the rationale behind top-performing code. This paper presents the process of building an experimental program, testing it, and figuring out how to improve it.
AWS DeepRacer, Reinforcement Learning, Competitive Programming.
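A reward function of the kind the paper discusses might look as follows; the input keys follow AWS DeepRacer's documented `params` dictionary, while the weighting scheme and the assumed 4 m/s top speed are illustrative choices, not the paper's final design:

```python
# Weighted DeepRacer-style reward: penalise leaving the track, reward staying
# near the centerline, and add a small bonus for speed.
def reward_function(params):
    if not params["all_wheels_on_track"]:
        return 1e-3                                  # near-zero reward off track
    half_width = params["track_width"] / 2.0
    # Centering term: 1.0 on the centerline, decaying quadratically to the edge.
    centering = max(1e-3, 1.0 - (params["distance_from_center"] / half_width) ** 2)
    speed_bonus = params["speed"] / 4.0              # assumed 4 m/s maximum speed
    return float(centering + 0.5 * speed_bonus)

r = reward_function({"all_wheels_on_track": True, "track_width": 1.0,
                     "distance_from_center": 0.0, "speed": 2.0})
print(r)  # 1.0 centering + 0.25 speed bonus = 1.25
```

Different mathematical shapes for the centering term (linear, quadratic, exponential) are exactly the kind of weighting choice the paper's analysis compares.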
Asif Imran, Department of CSIS, California State University San Marcos, CA 92096, USA
Software design debt aims to elucidate attempts to rectify present design flaws and studies their influence on the cost and time of the software. Design smells are a key cause of incurring design debt. Although design smell identification and measurement are predominantly considered in the current literature, appropriate approaches are lacking for identifying and communicating which design smells occur more frequently in newly developing software and which are more dominant in established software. This research describes a mechanism for identifying the design smells that are most prevalent in software. It narrows the focus of design debt to smells and depends on the appraisal of basic design best practices. A tool is provided that detects design smells by analyzing large volumes of source code. More specifically, 164,609 lines of code (LoC) and 5,712 class files of six developing, and 244,930 LoC and 12,048 class files of five established, open source Java software systems are analyzed. The results show that, of the 4,020 smell detections made for nine pre-selected types of design smells, 1,643 were in developing software and mainly consisted of four specific types of smells; for established software, 2,397 design smells were observed, predominantly consisting of four other types. The remaining design smell was equally prevalent in both developing and established software. The tool achieved desirable precision values ranging from 72.9% to 84.1%. Software engineers can use this approach to form a subset of the most critical design smells occurring in their specific software and focus on solving only those rather than considering all smells; the gained information will help them take the necessary design remediation actions.
design smell detection, software maintenance, design debt, software engineering.
Feihong Liu1 and Yu Sun2, 1Crean Lutheran High School, 12500 Sand Canyon Ave, Irvine, CA 92618, 2California State Polytechnic University, Pomona, CA, 91768, Irvine, CA 92620
Athletes in technical sports often find it difficult to analyze their own technique while they are playing. Often, athletes look at the technique of professional players to identify problems they may have. Unfortunately, many techniques, such as forehand and backhand swings in tennis, are relatively similar between a beginner and a professional, making comparison more difficult. On the other hand, techniques that appear different between professionals and casual players can also present challenges. This is especially true for serves in tennis, where the speed of the swing, the motion of the player, and the angle of the camera recording the player all pose a challenge in analyzing differences between professional and learning tennis players. In this paper, we used two machine learning approaches to compare the serves of two players. In addition, we developed a website that utilizes these approaches to allow for convenient access and a better experience. We found that our results adjusted for different speeds between the two players and made comparison much simpler.
Pose-estimation, Machine Learning, Scikit-learn.
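One plausible way to adjust for different swing speeds, as the comparison above requires, is dynamic time warping; the 1-D joint-angle traces below are invented for illustration (the paper's actual features come from pose estimation):

```python
# Dynamic time warping: aligns two sequences that trace the same motion at
# different speeds, so their distance stays small despite different lengths.
def dtw_distance(a, b):
    INF = float("inf")
    n, m = len(a), len(b)
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # stretch a
                                 D[i][j - 1],      # stretch b
                                 D[i - 1][j - 1])  # match
    return D[n][m]

slow = [0, 1, 2, 3, 4]            # the same motion sampled at half speed
fast = [0, 2, 4]
distance = dtw_distance(slow, fast)
print(distance)                   # small despite different sequence lengths
```

Applied per joint angle, such an alignment lets a learner's slower serve be compared frame-for-frame against a professional's faster one.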
Leheng Huang1 and Yu Sun2, 1Arcadia High School, 180 Campus Dr, Arcadia, CA 91006, 2California State Polytechnic University, Pomona, CA, 91768, Irvine, CA 92620
The creation and sustainability of academic teams have long been unnecessarily difficult due to the exorbitant costs of purchasing and maintaining equipment. These costs are a major barrier, especially in poorer areas where securing the funds for this equipment is difficult. In addition, when the equipment eventually breaks, it is often difficult to repair, forcing academic teams to purchase a new set. This project attempts to provide a product that drastically lowers equipment costs and allows the user to modify and repair it as necessary. The project resulted in the development of the Argo Buzzer System, which was created with input from experienced academic team members and has proven comparable to modern buzzer systems at a fraction of the cost.
Electronics, Machine learning, IoT system.
Jonas Baschung and Farshideh Einsele, Section of Business Information, Bern University of Applied Sciences, Switzerland
Objective: The objective of the study was to link Swiss food consumption data with demographic data and 30 years of Swiss health data, and to apply data mining to discover critical food consumption patterns linked with four selected chronic conditions: alcohol abuse, high blood pressure, high cholesterol, and diabetes.
Design: Food consumption data from the Swiss national survey menuCH were gathered, along with data from large surveys of demographics and health collected over 30 years from the Swiss population by the Swiss Federal Office of Public Health (FOPH). These databases were integrated, and the Frequent Pattern Growth (FP-Growth) algorithm for association rule mining was applied to the integrated database.
Results: This study applied the FP-Growth data mining algorithm for association rule analysis. 36 association rules were found for the four investigated chronic conditions.
Conclusions: FP-Growth was successfully applied to obtain promising rules showing food consumption patterns linked with lifestyle diseases and people's demographics such as gender, age group and Body Mass Index (BMI). The rules show that men over 50 years consume more alcohol than women and are consequently more at risk of high blood pressure. High cholesterol and type 2 diabetes are found frequently in people older than 50 years with an unhealthy lifestyle, such as no exercise, no consumption of vegetables or hot meals, and irregular daily eating. The intake of supplementary food does not seem to affect these four investigated chronic conditions.
Data Mining, Association Analysis, Apriori Algorithm, Diet & Chronical Diseases, Health Informatics.
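Frequent-itemset counting of the kind underlying FP-Growth can be illustrated on toy transactions; the items and the example rule below are invented, while the study itself mined the integrated menuCH/FOPH data (on small inputs, direct counting yields the same frequent itemsets as FP-Growth):

```python
from itertools import combinations
from collections import Counter

# Toy transactions, one per respondent, mixing demographics and diet items.
transactions = [
    {"alcohol", "male", "age>50"},
    {"alcohol", "male", "age>50", "high_bp"},
    {"vegetables", "female"},
    {"alcohol", "male", "high_bp"},
]
min_support = 2

# Count all item pairs and keep those meeting the minimum support.
pair_counts = Counter()
for t in transactions:
    for pair in combinations(sorted(t), 2):
        pair_counts[pair] += 1
frequent_pairs = {p: c for p, c in pair_counts.items() if c >= min_support}

# Confidence of the example rule {alcohol, male} -> {high_bp}.
support_am = sum(1 for t in transactions if {"alcohol", "male"} <= t)
support_amb = sum(1 for t in transactions if {"alcohol", "male", "high_bp"} <= t)
confidence = support_amb / support_am
print(frequent_pairs[("alcohol", "male")], round(confidence, 2))
```

The 36 rules reported in the study are of exactly this support/confidence form, mined over the much larger integrated database.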
Bolin Gao1, Yu Sun2, 1Fairmont Preparatory Academy, 2200 W Sequoia Ave, Anaheim, CA 92801, 2California State Polytechnic University, Pomona, CA, 91768, Irvine, CA 92620
As virtual reality technologies emerge, the ability to create visually immersive experiences has drastically improved. However, to accompany the visual immersion, audio must also become more immersive. This is where 3D audio comes in. 3D audio allows for the simulation of sounds coming from specific directions, allowing for a more realistic feeling. At present, there is a lack of sufficient tools for users to design immersive audio experiences that fully exploit the abilities of 3D audio.
This paper proposes and implements the following systems:
1. Automatic separation of stems from the incoming audio file, or letting the user upload the stems themselves
2. A simulated environment in which the separated stems are automatically placed
3. A user interface in order to manipulate the simulated positions of the separated stems.
We applied our application to a few selected audio files in order to conduct a qualitative evaluation of our approach. The results show that our approach was able to successfully separate the stems and simulate a dimensional sound effect.
3D Audio, signal processing, Head Related Transfer Functions.
P. Ravikumaran1, K. Vimala Devi2 and K. Valarmathi3, 1Dept. of Computer Science and Engineering, Fatima Michael College of Engg & Tech, Madurai- 625020, Tamil Nadu, India, 2School of Computer Science and Engineering (SCOPE), Vellore Institute of Technology, Vellore Campus, Vellore- 632014, India, 3Dept of ECE, P.S.R Engineering College, Sivakasi- 626140, Tamil Nadu, India
In recent times, medical information has taken the form of an overwhelming amount of data that is difficult to handle using traditional procedures. Precision medical data study focuses on understanding illness at an early stage, patient health care centres, and providers, leading to the growth of big data in the medical and basic healthcare communities. It concentrates primarily on anticipating and discovering direct analysis of some of the substantial health effects that have increased in numerous countries. The existing health industry cannot retrieve detailed information from the chronic disease directory. Identifying the advancement of chronic kidney disease (CKD) and the methods used to detect the disease is a difficult task, and doing so can lower the cost of diagnosis. In this research, we propose a modified MapReduce and pruning layer-based classification model using a deep belief network (DBN), with the CKD dataset acquired from the UCI machine learning repository. We utilize the full potential of DBNs by deploying deep learning methodology to establish better classification of the patients' kidney condition. Finally, the data are trained and classified using the classification layer, and the quality is compared to the existing method.
Chronic kidney disease, deep belief neural network, MapReduce, Pruning layer.
Richard Lin1 and Yu Sun2, 1Phillips Academy Andover, 180 Main St, Andover, MA 01810, 2California State Polytechnic University, Pomona, CA, 91768, Irvine, CA 92620
In recent years, the world has been undergoing a drastic change in its age demographic due to an imbalance caused by decreasing birth rates and an increase in the elderly population. While 8.5% of the global population were elders in 2015, studies show that this number will hit 17% by 2050. This project focuses on the efficiency of automatic fall detection and contributes to the evolution of fall protection, both for elders and for the general population.
Machine Learning, Fall detection, Mobile APP.
Sandip Hodkhasa and Huiping Guo, California State University, Los Angeles, California, USA
Watermarking is extensively used in various media for data transfer, content authentication and integrity. A continuous flow of data is always vulnerable to tampering. This research proposes a new watermarking scheme that detects tampering in a stream of data. The stream is dynamically divided into different-sized groups using synchronization points. A computed watermark is embedded in each group by hashing the concatenation of the current group and the next group. A secondary watermark, generated from the current group, prevents tampering by any attacks within the current group. A watermark verification table is used to determine all possible scenarios for false results. Experiments are performed to show the scheme's efficiency. False results decrease as the group size becomes larger, and random burst attacks require a larger group size. The scheme also shows that as the grouping parameter 'm', which defines the synchronization point, increases, the false positive rate decreases.
Cryptography, Digital Watermarking, Hashing, Information Hiding, Tamper Detection.
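The chained group watermark can be sketched as follows; unkeyed SHA-256 is used for brevity, and the actual scheme's keying, embedding, and synchronization-point logic are omitted:

```python
import hashlib

# The watermark for a group hashes the concatenation of the current group and
# the next group, so tampering with either group breaks verification.
def group_watermark(current_group, next_group):
    payload = ",".join(map(str, current_group)) + "|" + ",".join(map(str, next_group))
    return hashlib.sha256(payload.encode()).hexdigest()

g1, g2 = [3, 1, 4, 1], [5, 9, 2, 6]
wm = group_watermark(g1, g2)
tampered = group_watermark([3, 1, 4, 2], g2)   # one element modified
print(wm != tampered)  # tampering changes the watermark
```

Because each watermark spans two adjacent groups, a modification in any group invalidates the watermark of the preceding group as well, which is what localises tampering in the stream.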
Alex Xie1 and Yu Sun2, 1Chadwick School, 26800 S Academy Dr, Palos Verdes Peninsula, CA 90274, 2California State Polytechnic University, Pomona, CA, 91768, Irvine, CA 92620
Currently, there is increasing participation in investment, especially in the stock market, as it appeals to more average citizens. One common roadblock these individuals face is the lack of information about the economy, politics, regulation, etc., all of which can affect the stock market. This app addresses the problem by collecting mass information from third-party social media. Information from social media platforms has its advantages, mainly due to citizen involvement and the expertise some users may possess. Pulling large amounts of data from these social media users avoids bias and establishes reliable sourcing. Using this information as a predictor, the app computes the data and predicts the performance of a stock for the next three days. By doing this, users of the app can easily stay informed through instant quantitative prediction instead of having to read all over social media. The app also indirectly manages people's wealth, because it allows users to make smarter decisions, thus increasing their money's potential. In certain areas, this app is able to perform the same duty as a licensed wealth manager.
Stock, Asset, Investment, Exchange.
Andy Kuang1 and Yu Sun2, 1Eleanor Roosevelt High School, 7447 Scholar Way, Eastvale, CA 92880, 2California State Polytechnic University, Pomona, CA, 91768, Irvine, CA 92620
The pandemic has spread around the world, with some students still using online learning platforms today and their guardians leaving them unattended in order to provide for their families. However, with the lack of supervision, children are taking advantage of these times to perform unproductive activities such as gaming. During school days, there are also many breaks provided for students to relax and reset their mentality, which allows a student to stay focused during class; but this does not seem to be the case, as students are spending this time indoors after many hours of staring at a device, when spending it outdoors instead could relax their eyes and prevent eye damage. This paper proposes software that tracks a student's productivity based on their tennis racket movement and speed using a Particle board, accelerometer, and tracker. With a tracker, guardians would be able to get constant updates on their children's activities. We applied our application to a real-life scenario and conducted a qualitative evaluation of the approach. The results show that students spend less time indoors performing nonproductive activities, students spend more time outside playing their sport of choice, and parents are less stressed about their children's educational and physical health.
Pandemic, C++ and HTML, Mobile APP.
Natarajan Meghanathan, Department of Electrical & Computer Engineering and Computer Science, Jackson State University, Jackson, MS, USA
Some students in Computer Science and related majors excel in programming-related assignments but not equally well in the theoretical (non-programming) assignments, and vice versa. We refer to this as the "Theory vs. Programming Disparity (TPD)". In this paper, we propose a spectral analysis-based approach to quantify the TPD metric for any student in a course based on the student's percentage scores (considered as decimal values in the range of 0 to 1) in the course assignments, which include both theoretical and programming-based assignments. For the student whose TPD metric is to be determined, we compute a Difference Matrix of the scores in the assignments, wherein entry (u, v) is the absolute difference between the student's decimal percentage scores in assignments u and v. We subject the Difference Matrix to spectral analysis and observe that the assignments can be partitioned into two disjoint sets such that the decimal percentage scores within each set are close to each other, while the scores across the two sets differ relatively more. The TPD metric is computed from the Euclidean distance between the tuples representing the actual numbers of theoretical and programming assignments and the numbers of theoretical and programming assignments in each of the two disjoint sets. The larger the TPD score (on a scale of 0 to 1), the greater the disparity, and vice versa.
Spectral Analysis, Theory vs. Programming Disparity, Eigenvector, Bipartivity.
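The pipeline described above (Difference Matrix, spectral bipartition, distance-based score) can be sketched as follows. This is a minimal illustration under stated assumptions: the partition uses the sign pattern of the eigenvector for the smallest eigenvalue, and the final normalization to the 0 to 1 scale is one plausible choice, not necessarily the paper's exact formula.

```python
import numpy as np

def tpd_metric(scores, is_programming):
    """Theory vs. Programming Disparity from per-assignment scores.

    scores: decimal percentage scores in [0, 1], one per assignment.
    is_programming: True for programming assignments, False for theoretical.
    """
    scores = np.asarray(scores, dtype=float)
    prog = np.asarray(is_programming, dtype=bool)

    # Difference Matrix: entry (u, v) = |score_u - score_v|.
    D = np.abs(scores[:, None] - scores[None, :])

    # Spectral bipartition: split the assignments by the sign pattern of
    # the eigenvector for the smallest (most negative) eigenvalue, so that
    # scores within each part stay close while the parts differ.
    _, vecs = np.linalg.eigh(D)        # eigenvalues in ascending order
    part_a = vecs[:, 0] >= 0

    n_theory = int((~prog).sum())
    n_prog = int(prog.sum())

    # Euclidean distance of a part's (theory, programming) composition
    # from the fully disparate split (one part all theory, no programming);
    # take whichever labeling of the two parts matches better.
    def dist(part):
        t = int((part & ~prog).sum())
        p = int((part & prog).sum())
        return np.hypot(t - n_theory, p)

    d = min(dist(part_a), dist(~part_a))
    d_max = np.hypot(n_theory, n_prog)
    return 1.0 - d / d_max if d_max else 0.0
```

For a student scoring 0.9 on both theoretical assignments and 0.3 on both programming assignments, the spectral split exactly separates theory from programming and the metric reaches its maximum of 1.0.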
Sukhamay Kundu, Dept of Computer Science and Engineering, Louisiana State University, Baton Rouge, LA 70803, USA
The regression-line for a set of data-points pi = (xi, yi), 1 ≤ i ≤ N, lacks the rotation property in the sense that if each pi is rotated by an angle θ around the origin, the regression-line does not rotate by the same angle θ, except in the special case when all pi's are collinear. This makes the regression-line unsuitable as a linear model of a set of data points for applications in data mining and machine learning. We present an alternative linear model that has the rotation property. In many ways, the new model is also intuitively more appealing, as we show with examples. Computing the new linear model takes the same O(N) time as computing the regression-line.
perpendicular distance, regression-line, rotation property, application to data mining.
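The keywords suggest the alternative model is based on perpendicular distances. A minimal sketch, assuming an orthogonal (total-least-squares) line obtained from the principal eigenvector of the centered scatter matrix, contrasts its rotation behavior with the ordinary regression line; the point set and rotation angle in the usage below are arbitrary examples.

```python
import numpy as np

def ols_slope(pts):
    # Ordinary regression-line slope (minimizes vertical distances).
    return np.polyfit(pts[:, 0], pts[:, 1], 1)[0]

def orthogonal_direction(pts):
    # Direction of the line minimizing perpendicular distances: the
    # principal eigenvector of the centered scatter matrix.
    c = pts - pts.mean(axis=0)
    _, vecs = np.linalg.eigh(c.T @ c)
    return vecs[:, -1]                 # eigenvector of the largest eigenvalue

def rotate(pts, theta):
    # Rotate every point by theta around the origin.
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return pts @ R.T
```

Rotating the points transforms the scatter matrix as R S R^T, so the perpendicular-distance direction rotates by exactly θ, while the OLS slope generally changes by a different amount; this is the rotation property the abstract refers to.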
Charles Huang1 and Yu Sun2, 1Richmond Christian School, 10260 No 5 Rd, Richmond, BC, Canada V7A 4E5, 2California State Polytechnic University, Pomona, CA, 91768, Irvine, CA 92620
Newly developed software has become increasingly valuable: conventional tools that once dominated simple tasks are now challenged by far simpler, more accurate, and more versatile programs. Using machine learning and AI, we developed a Flutter application that identifies an object and measures the distance from the user to it through the camera. Our application efficiently measures a variety of objects, including objects beyond the range of common measuring tools. In addition, when quick calculations are needed, using an application on a phone yields faster results than physical tools. We conducted an experiment to test the accuracy and practicality of our application, along with a survey on its clarity and ease of use. The results indicate that the application is practical and easy to use, but the accuracy of its distance prediction has room for improvement.
Artificial intelligence, distance prediction, Mobile APP.
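The abstract does not describe the estimation method; a common geometric baseline for camera-based distance measurement is the pinhole relation between an object's known real-world height and its apparent height in pixels (with the object class and size typically supplied by the ML detector). The function below is an illustrative sketch, not the application's actual model.

```python
def distance_from_height(focal_px, real_height_m, pixel_height):
    """Pinhole-camera estimate: an object of known real-world height
    (meters) that appears pixel_height pixels tall in an image taken
    with a focal length of focal_px pixels is roughly this far away."""
    return focal_px * real_height_m / pixel_height
```

For example, a 1.8 m tall object spanning 120 pixels under a 1000-pixel focal length is estimated at about 15 m, which also hints at why accuracy degrades: small pixel-height errors on distant objects translate into large distance errors.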
Lijuan Zhou1 and Danni Lv2, 1College of Cyberspace Security, Hainan University, Haikou, 570228, China, 2College of Information Engineering, Capital Normal University, Beijing, 100089, China
With the rapid development of service-oriented computing (SOC), Web services have become the preferred technology for realizing service-oriented goals. Finding a cheap, high-quality Web service composition among the large number of Web services that provide the same function, that is, optimization based on Quality of Service (QoS), is the central problem in the Web service composition optimization model and is crucial to the efficiency of service composition. In this work, we use intelligent optimization algorithms to search for the best combination of Web services to realize the functionality of a workflow's tasks, and we propose a novel approach, A Hybrid Optimized Multi-Population Flower Pollination Algorithm (AHOMFPA), to solve this problem. Empirical comparisons demonstrate that AHOMFPA has advantages over existing algorithms in efficiency and feasibility.
Web service composition, Quality of Service (QoS), Intelligence optimization algorithm, Flower Pollination Algorithm.
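The abstract does not give AHOMFPA's details; the sketch below shows only the standard single-population Flower Pollination Algorithm applied to QoS-aware service selection, where each workflow task is assigned one candidate service and fitness is a weighted sum of QoS attributes (lower is better). The function names, the switch probability, and the real-valued encoding are illustrative assumptions, not the paper's hybrid multi-population variant.

```python
import numpy as np

rng = np.random.default_rng(0)

def levy(size, beta=1.5):
    # Mantegna's algorithm for Levy-flight step sizes.
    from math import gamma, pi, sin
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, size)
    v = rng.normal(0.0, 1.0, size)
    return u / np.abs(v) ** (1 / beta)

def fpa_compose(qos, weights, n_flowers=20, iters=200, p=0.8):
    """Pick one candidate service per task so the weighted QoS sum is minimal.

    qos: one (n_candidates, n_attributes) array per workflow task.
    weights: per-attribute weights; a lower weighted sum is better.
    """
    n_tasks = len(qos)
    bounds = np.array([len(c) for c in qos], dtype=float)

    def fitness(x):
        idx = np.clip(x.astype(int), 0, bounds.astype(int) - 1)
        return sum(qos[t][idx[t]] @ weights for t in range(n_tasks))

    pop = rng.uniform(0.0, bounds, (n_flowers, n_tasks))
    fit = np.array([fitness(x) for x in pop])
    best = pop[fit.argmin()].copy()
    for _ in range(iters):
        for i in range(n_flowers):
            if rng.random() < p:      # global pollination toward the best flower
                cand = pop[i] + levy(n_tasks) * (best - pop[i])
            else:                     # local pollination between two random flowers
                j, k = rng.integers(0, n_flowers, 2)
                cand = pop[i] + rng.random() * (pop[j] - pop[k])
            cand = np.clip(cand, 0.0, bounds - 1e-9)
            f = fitness(cand)
            if f < fit[i]:            # greedy replacement
                pop[i], fit[i] = cand, f
        best = pop[fit.argmin()].copy()
    return np.clip(best.astype(int), 0, bounds.astype(int) - 1), float(fit.min())
```

In a real composition setting the QoS attributes (cost, response time, availability, and so on) would be normalized first, and hard constraints handled via penalties in the fitness function.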