

Name Investigator Tech ID Licensing Manager Name Licensing Manager Email Description Tags
Fast Dynamic Parallel Approximate Neighbor Search Data Structure Using Space Filling Curves Piyush Kumar 16-096 Matthieu Dumont <p>Nearest neighbor search (NNS) is a technique used in computing to quickly and accurately locate the point in a dataset that is closest, in Euclidean distance, to a given query point. Increasingly, NNS is a key sub-task in algorithms and applications that process, organize, cluster, learn from, and understand massive data sets, such as those used in the automotive, aerospace, and geographic information system (GIS) industries.</p> <h2>The Problem:</h2> <p>Exact NNS works well for small data sets, but it is too time-consuming on large ones. The approximate nearest neighbor (ANN) search, an alternative to NNS, improves search time and saves memory by estimating the nearest neighbor, without guaranteeing that the actual nearest neighbor will be returned in every case. Two limitations of this approach are that it is difficult to make an ANN algorithm dynamic (i.e., allow insertions and deletions in the data structure) or to parallelize it (i.e., use multiple processors to speed up queries).</p> <h2>The Solution:</h2> <p>Dr. Kumar and his research team are developing a novel, practical, and theoretically sound method for the NNS problem in lower-dimensional spaces. Specifically, the researchers are creating an approximate k-nearest neighbor algorithm, based on Morton sorting of points, to build a software library for approximate nearest neighbor searches in Euclidean spaces. The library will use multi-core machines efficiently (parallel) and support the insertion and deletion of points at run time (dynamic). The new algorithm delivers search results with expected logarithmic query times that are competitive with or exceed those of Mount's approximate nearest neighbor (ANN) library.</p> <h2>Advantages:</h2> <ul> <li>Speed on multicore machines</li> <li><span>Minimum spanning tree computation</span></li> </ul>
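The idea behind Morton sorting can be illustrated with a minimal sketch (this is not the licensed library's code): interleaving the bits of a point's coordinates yields its position on a Z-order space filling curve, and points that are close in space tend to be close in this order, so scanning a small window around a query's position gives an approximate nearest neighbor. The `window` parameter and 2-D integer coordinates are simplifying assumptions.

```python
import bisect

def interleave_bits(x, y, bits=16):
    """Morton (Z-order) code of a 2-D integer point: bit-interleave x and y."""
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (2 * i)       # x bits go to even positions
        code |= ((y >> i) & 1) << (2 * i + 1)   # y bits go to odd positions
    return code

def approx_nearest(points, query, window=2):
    """Sort points by Morton code, then scan a small window around the
    query's position on the curve: an approximate, not exact, neighbor."""
    ranked = sorted(points, key=lambda p: interleave_bits(*p))
    codes = [interleave_bits(*p) for p in ranked]
    i = bisect.bisect_left(codes, interleave_bits(*query))
    lo, hi = max(0, i - window), min(len(ranked), i + window + 1)
    return min(ranked[lo:hi],
               key=lambda p: (p[0] - query[0]) ** 2 + (p[1] - query[1]) ** 2)
```

Because the sorted order is a plain array, insertions and deletions reduce to ordered-list updates, and disjoint ranges of the array can be scanned by separate cores, which is the intuition behind making the structure dynamic and parallel.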
Method to Elucidate Molecular Structure from Momentum Transfer Cross Section Christian Bleiholder 17-008 Matthieu Dumont <p>Ion mobility spectrometry-mass spectrometry (IMS-MS) is ideally suited to study co-existing, transient conformations of proteins and their complexes related to diseases because of its high sensitivity and the speed of MS analysis.</p> <p>Many existing results suggest that IMS-MS could accurately elucidate structures for these protein conformations in a high-throughput manner.</p> <p>The present technology identifies how protein tertiary structures can be determined from IMS-MS data in an automated manner.</p> <h2>Advantages:</h2> <ul> <li>IMS-MS requires a fraction of the sample amount and time</li> <li>Does not suffer from charge-state dependent protein dynamics in the gas phase</li> <li>Computationally efficient</li> <li>Automated</li> </ul>
CNN Filters for Noise Estimation and Improved Denoising in Low-Light Noisy Images Adrian Barbu 17-019 Matthieu Dumont <p>The proposed invention is a system and method for training a convolutional neural network (CNN) to predict the tuning parameter used by an existing image denoising method (BM3D), in order to obtain the best possible denoising results on images taken by digital cameras in low-light conditions. The performance of the BM3D denoising algorithm varies with this tuning parameter.</p> <p>This work presents a method to predict the best parameter value for each image patch; using this prediction yields better results than using a fixed parameter value for all images.</p> <p>Many image denoising methods are available today, but they are trained and tested on artificial noise. According to our observations, the BM3D method works best on images corrupted by real low-light noise. Our work enhances BM3D by predicting what its tuning parameter should be for each image patch being denoised.</p> <p>This technology could be sold directly to consumers as an app or embedded in a mobile phone or digital camera.</p>
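The core idea, per-patch selection of a denoiser's tuning parameter, can be sketched as follows. This is only an illustration of how training labels for such a CNN could be produced: a simple box-smoothing filter stands in for BM3D (an assumption, since BM3D itself is far more elaborate), and the "best" parameter for a patch is found by exhaustive search against the clean reference.

```python
import numpy as np

def denoise(patch, strength):
    """Stand-in for BM3D: box smoothing whose radius is the tuning
    parameter (BM3D's real parameter is a noise-level sigma)."""
    k = int(strength)
    if k <= 0:
        return patch.astype(float).copy()
    padded = np.pad(patch, k, mode='edge')
    h, w = patch.shape
    out = np.zeros((h, w), dtype=float)
    for dy in range(-k, k + 1):
        for dx in range(-k, k + 1):
            out += padded[k + dy:k + dy + h, k + dx:k + dx + w]
    return out / (2 * k + 1) ** 2

def best_parameter(clean, noisy, candidates=(0, 1, 2, 3)):
    """Training label: the parameter minimizing MSE against the clean patch.
    A CNN would be trained to predict this label from the noisy patch alone."""
    return min(candidates,
               key=lambda s: np.mean((denoise(noisy, s) - clean) ** 2))
```

On a flat patch the search favors heavy smoothing, while on a clean structured patch it favors no smoothing at all, which is exactly the patch-dependence the invention exploits.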
Reproducible Random Number Generation using Unpredictable Random Numbers Michael Mascagni 16-103 Matthieu Dumont <p>The use of random numbers in simulation is widespread and crucial in a large number of applications. It is equally important that applications using random numbers be reproducible. The requirement of reproducibility matters for many reasons:</p> <ol> <li>Code development and debugging would be nearly impossible without reproducible random numbers</li> <li>Many simulation applications require absolute reproducibility in certification situations, such as those mandated by the Nuclear Regulatory Commission</li> <li>Many journals, such as the ACM Transactions on Mathematical Software, now mandate code reproducibility for publication.</li> </ol> <p>As a result, many new and very capable random number generators designed primarily for cryptographic use, and hence unpredictable, have been deemed inadequate for simulation purposes. One such generator, the Intel digital random number generator (DRNG), is of particular note and served as Dr. Mascagni's inspiration.</p> <p>The Scalable Parallel Random Number Generators (SPRNG) library that Dr. Mascagni developed can produce independent full-period random number streams based on parameterization. The parameter can be thought of as a very long integer, and SPRNG currently assigns parameters to streams. One can use an unpredictable RNG to produce the parameters in SPRNG, and by augmenting the SPRNG RNG data structure, this can be done in a reproducible way. The reproducibility is of the so-called forensic type: reproducing the results requires extra software to collect the parameters used in a computation and to stage a new computation with the same parameters.</p> <h2>Advantages:</h2> <ul> <li>All current Intel and AMD processors expose the RdRAND instruction, which produces the unpredictable random values. Thus, this approach provides a reproducible generator for a wide variety of machines and permits parallel and distributed computing without the need for message passing, since RdRAND can be used independently on each node.</li> </ul>
Fast Compression and Estimation of the Channel State Information (CSI) with Sparse Sinusoid Approximation for Broadband Wireless Networks Zhenghao Zhang 16-079 Matthieu Dumont <p>CSIApx is a very simple algorithm for compressing the Channel State Information (CSI) of OFDM systems. The algorithm is grounded in rigorous mathematical analysis and has provably bounded performance. It is well suited to hardware implementation because it involves only a small number of complex multiplications, similar to a digital FIR filter. In the illustrated embodiment, CSIApx has been extensively tested with both experimental data and the Wi-Fi channel model, and the results confirm that, while dramatically reducing computational complexity, CSIApx still significantly outperforms existing solutions in both compression ratio and accuracy in nearly all cases.</p> <p>Accordingly, the present invention provides an improved system and method for compressing the CSI for OFDM that is accurate and computationally easy to implement.</p> <h2>Applications:</h2> <ul> <li>Computer systems</li> </ul> <h2>Advantages:</h2> <ul> <li>Compressed CSI consists of only five or fewer complex numbers</li> <li>Extremely simple</li> <li>Easy to quantize and transmit</li> <li>Based on rigorous mathematical foundations</li> <li>Resilient against noise</li> </ul>
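The sparse-sinusoid idea can be illustrated with a toy example (CSIApx's actual algorithm is not reproduced here): a multipath channel's frequency response is a sum of a few complex sinusoids, one per path delay, so the whole CSI vector can be compressed to the handful of complex amplitudes. For simplicity this sketch assumes the candidate delays are known and on-grid, and fits the amplitudes by least squares.

```python
import numpy as np

N = 64                       # OFDM subcarriers
delays = [0, 3]              # multipath delays in samples (assumed known)
gains = [1.0 + 0.5j, 0.3 - 0.2j]
k = np.arange(N)

# Frequency-domain CSI of a 2-tap channel: a sum of complex sinusoids.
H = sum(g * np.exp(-2j * np.pi * k * d / N) for g, d in zip(gains, delays))

# Compress: fit the complex amplitude of each candidate sinusoid.
A = np.exp(-2j * np.pi * np.outer(k, delays) / N)
coeffs, *_ = np.linalg.lstsq(A, H, rcond=None)

# Decompress: the full 64-entry CSI is rebuilt from just 2 complex numbers.
H_rec = A @ coeffs
```

Here 64 complex CSI values are represented by 2 complex coefficients, which mirrors the "five or fewer complex numbers" compression claimed above.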
Cashtags: Prevent Leaking Sensitive Information through Screen Display An-I Andy Wang 15-091 Dr. Matthieu Dumont <p>Mobile computing is the new norm. As people feel increasingly comfortable computing in public places such as coffee shops and transportation hubs, the risk of exposing sensitive information increases. While solutions exist to guard the communication channels used by mobile devices, the visual channel remains, to a significant degree, open. Existing solutions aim only to prevent the visual leakage of password entries; once the user has been successfully authenticated, all accessed sensitive information is displayed in full view. No existing mechanism allows arbitrary data to be marked as sensitive. Shoulder surfing is becoming a viable threat in a world where sensitive information can be extracted from images with modest computing power.</p> <p>In response, we present Cashtags: a system to defend against attacks on mobile devices based on visual observations. The system allows users to access sensitive information in public without fear of visual leaks. This is accomplished by intercepting sensitive data elements before they are displayed on screen, then replacing them with non-sensitive information. In addition, the system provides a means of computing with sensitive data in a non-observable way.</p>
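The intercept-and-substitute idea can be sketched in miniature (the real system hooks the platform's display pipeline; the alias table and example values below are illustrative assumptions): user-marked sensitive strings are swapped for screen aliases before display, and the aliases are swapped back whenever the real values are needed for computation.

```python
# User-defined mapping of sensitive data to non-sensitive screen aliases.
aliases = {
    "4111 1111 1111 1111": "$visa",       # hypothetical card number
    "john.doe@example.com": "$email",     # hypothetical address
}

def mask(text):
    """Replace sensitive terms with their aliases before screen display."""
    for secret, tag in aliases.items():
        text = text.replace(secret, tag)
    return text

def unmask(text):
    """Restore aliases to real values so apps can compute with them."""
    for secret, tag in aliases.items():
        text = text.replace(tag, secret)
    return text
```

A shoulder surfer sees only `$visa` on screen, while `unmask(mask(s)) == s` guarantees the substitution is lossless for the application underneath.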
Slip Mitigation Control for an Electric Powered Wheelchair Emmanuel Collins 14-060 Robby Freeborn-Scott <p>Electric Ground Vehicles (EGVs) such as electric automobiles, golf carts, and electric powered wheelchairs are increasing in use since they are energy efficient, environmentally friendly, and reduce oil dependency. However, when traveling across slippery surfaces, EGVs become susceptible to lateral slip.</p> <p>Our novel technology mitigates slip using feedback control. The essential components are a reference model based on a mass-damper system, a trajectory-tracking controller for each wheel, and a maximum tractive force estimator. The reference model generates the desired acceleration, velocity, and position of the vehicle based on user inputs, for example the position of the steering wheel and throttle, or the commands from a joystick displacement. The user inputs are mapped to force and torque inputs to the reference model, and the commanded trajectory is mapped to desired wheel trajectories using the controller. The maximum tractive force estimator determines the minimum of the maximum tractive forces that the traversed surface can apply to each wheel. An associated lower bound on the mass of the reference model is used to determine when one or more wheels has been asked to follow a trajectory that requires more than this min-max tractive force estimate, from which it can be inferred that slip has occurred or may soon occur. Subsequently, the mass parameter in the reference model is reduced to help ensure that future slip will not occur.</p>
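Two pieces of the scheme are simple enough to sketch: integrating the mass-damper reference model, and the slip-inference test that compares the commanded force against the min-max tractive force estimate. The parameter values are toy assumptions, not wheelchair data, and the mass-adaptation logic itself is omitted.

```python
def simulate_reference(F, m=2.0, b=5.0, dt=0.01, steps=500):
    """Integrate the mass-damper reference model m*dv/dt + b*v = F.
    The steady-state velocity is F/b (toy parameter values assumed)."""
    v = 0.0
    traj = []
    for _ in range(steps):
        v += (F - b * v) / m * dt   # forward-Euler step
        traj.append(v)
    return traj

def slip_predicted(F_commanded, f_max_estimate):
    """Slip is inferred when the commanded trajectory would demand more
    force than the estimated min-max tractive force of the surface."""
    return F_commanded > f_max_estimate
```

When `slip_predicted` fires, the patented method responds by lowering the reference model's mass parameter so that subsequent commanded trajectories stay within the surface's tractive limit.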
Adaptive Nonlinear Model Predictive Control Using a Neural Network and Sampling Based Optimization Emmanuel G. Collins 14-086 Robby Freeborn-Scott <p>The model predictive control algorithm uses a nonlinear model, input-domain sampling, and a graph search technique without dependence on gradients. The nonlinear model is obtained by using input and output data from the system to tune a neural network model. The initial neural network can be trained using open-loop data. Once the predictive control is turned on, the neural network continually adapts to represent time-varying changes in the system. This is the first approach to adaptive nonlinear model predictive control that simultaneously performs online adaptation and model predictive control without calculating gradients for the predictive control.</p> <p>This technology provides, in a single software package, a very general means of simultaneously identifying and controlling nonlinear systems without computing gradients, which leads to lower computational requirements than currently available commercial methods.</p> <p>Sampling the input domain guarantees satisfaction of hard constraints on input commands. Multi-core processing will give the proposed method an increasingly greater computational speed advantage over current alternatives as parallel computing hardware continues to become more widespread and more capable.</p>
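The gradient-free, sampling-based part of the idea can be sketched as random shooting over the input domain (this is a generic sketch, not the patented graph-search formulation): candidate input sequences are drawn from the bounded input set, rolled out through the model, and the first input of the cheapest rollout is applied. A fixed linear map stands in for the adaptive neural network model, which is an assumption for brevity; the samples trivially satisfy the input bounds, illustrating the hard-constraint guarantee.

```python
import random

def model(x, u):
    """Stand-in for the learned neural-network model (assumption)."""
    return 0.9 * x + 0.5 * u

def mpc_step(x, horizon=5, samples=200, u_bounds=(-1.0, 1.0),
             rng=random.Random(0)):
    """Sampling-based predictive control: draw input sequences from the
    bounded input domain, roll out the model, apply the cheapest plan's
    first input. No gradients are ever computed."""
    best_u, best_cost = 0.0, float('inf')
    for _ in range(samples):
        seq = [rng.uniform(*u_bounds) for _ in range(horizon)]
        xs, cost = x, 0.0
        for u in seq:
            xs = model(xs, u)
            cost += xs * xs + 0.1 * u * u   # quadratic stage cost
        if cost < best_cost:
            best_cost, best_u = cost, seq[0]
    return best_u

# Closed loop: the state is regulated toward the origin, gradient-free.
x = 5.0
for _ in range(20):
    x = model(x, mpc_step(x))
```

Each rollout is independent, so the `samples` loop is exactly the part that parallelizes across cores, which is why multi-core hardware widens this method's speed advantage.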
Voltage Profile Based Fault Detection Michael (Mischa) Steurer 13-147 Robby Freeborn-Scott <p>Fault location in a traditional power system is a challenging task. Electric power flows only in one direction: from the substation to the various loads. Therefore, when a severe short-circuit fault occurs, there is a current rise with voltage sag near the faulted node or line and everything downstream of it. If the fault protection system responds adequately, it isolates the assumed faulted area, which includes all the nearby and downstream customers of the actual faulted area.</p> <p>In a system containing distributed resources (DRs), most fault location technologies ignore the presence of DRs by assuming either low DR penetration or no power injection from DRs during a fault. The few technologies that do consider the presence of DRs have not considered a current-limited system when a fault occurs.</p> <p>As the amount of local generation (PV, microturbines, etc.) increases, existing distribution-system fault location methods do not always apply, for reasons including cost, the complexity introduced by mesh-like system topologies, and bidirectional power flow. This FSU invention takes advantage of the system topology, the presence of controllable voltage source converters (VSCs), and the change in the voltage profile in the presence of a fault. Using the VSCs to help locate the fault overcomes the problem of relying on measured voltage when the voltage has completely collapsed in a section because of a fault in the distribution system. Instead of hindering the fault location process, the VSCs are used to help support the voltage, locate the fault, and provide fast restoration.</p>
Method of Mitigating Backlash of Mechanical Gear Systems Using a Damper Motor Michael "Mischa" Steurer 08-018 Robby Freeborn-Scott <p>The technology comprises a torque damper motor connected to the output side of a mechanical gear system. The damper motor, along with its associated control system, mitigates backlash problems, reduces torsional resonance, and provides improved output torque control. In the preferred embodiment, the damper motor is powered by a power-electronics-based variable speed drive. The damper motor can be significantly less powerful than the overall rating of the gear system (typically 5-10% of the overall rating) while still providing the enhanced performance.</p> <p>The invention can be applied to any rotating system having a gear train, and it eliminates or at least mitigates many of the problems inherent in rotating gear systems. For example, the invention could be used with many types of torque-creating devices beyond steam turbines, electric motors, and compressors. Likewise, although one embodiment was described in detail, the invention is equally applicable to speed-decreasing as well as speed-increasing gear trains.</p>
Leakage Current Suppression Solutions for Photovoltaic Cascaded Multilevel Inverter Application Hui (Helen) Li 13-176 Robby Freeborn-Scott <p>The cascaded multilevel inverter is considered a promising alternative for low-cost, high-efficiency photovoltaic (PV) systems. However, the leakage current issue, resulting from the stray capacitances between the PV panels and the earth, must be solved for the cascaded inverter to be reliably applied in PV applications.</p> <p>The proposed technology solves the leakage current issue in the PV cascaded multilevel inverter by using passive filters. It retains the simple structure of the inverter and does not complicate the associated control system. The system is a photovoltaic cascaded inverter comprising inverter modules, each with an AC and a DC side. In addition, the system includes a common DC-side choke coupled to the DC side of each inverter module and a common-mode AC-side choke coupled to the AC side of each inverter module.</p>
Methods for Implementing Stochastic Anti-Windup PI Controllers Emmanuel Collins 08-019 Robby Freeborn-Scott <p>In the present invention, different circuit-based implementations of stochastic anti-windup PI controllers are provided for a motor drive controller system. The designs can be implemented in a Field Programmable Gate Array (FPGA) device. The anti-windup PI controllers are implemented stochastically so as to enhance the computational capability of the FPGA. The invention encompasses different circuit arrangements that implement distinct anti-windup algorithms for a digital PI speed controller. These anti-windup algorithms can significantly improve the control performance of variable-speed motor drives.</p> <p>Compared with existing technologies, the stochastic PI controller provides an efficient implementation approach that uses straightforward digital logic circuits while significantly reducing circuit complexity. The present invention therefore notably improves the performance of the stochastic PI controller and saves digital resources in a motor drive control system. The immediate and future applications are motor drive controllers for induction motor systems, and more particularly proportional-integral (PI) controllers. Use of the invention will expand the market for FPGAs, since their capability is greatly increased while their relative cost is reduced.</p>
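The anti-windup algorithm such circuits implement can be sketched conventionally (the invention's contribution is the stochastic-logic FPGA realization, which is not reproduced here; gains and limits below are illustrative assumptions). In back-calculation anti-windup, whenever the output saturates, the saturation excess is fed back to bleed off the integrator so it cannot wind up.

```python
def pi_antiwindup(error_seq, kp=1.0, ki=0.5,
                  u_min=-1.0, u_max=1.0, kb=0.5, dt=0.1):
    """Anti-windup PI with back-calculation: the integrator is corrected
    by the saturation excess (u - u_unsat), preventing windup."""
    integ, outputs = 0.0, []
    for e in error_seq:
        u_unsat = kp * e + ki * integ
        u = min(max(u_unsat, u_min), u_max)       # actuator saturation
        integ += (e + kb * (u - u_unsat)) * dt    # back-calculation term
        outputs.append(u)
    return outputs

out = pi_antiwindup([2.0] * 50)   # sustained large error saturates output
```

Under a sustained large error the output pins at the saturation limit while the integrator settles to a bounded value, so the controller recovers immediately once the error reverses instead of unwinding a huge integral.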
The Lookahead Instruction Fetch Engine (LIFE) Dr. David Whalley 08-033 Dr. Matthieu Dumont <p>The Lookahead Instruction Fetch Engine (LIFE) provides a mechanism to guarantee instruction fetch behavior in order to avoid accesses to fetch-associated structures, including the level-one instruction cache (L1 IC), instruction translation lookaside buffer (ITLB), branch predictor (BP), branch target buffer (BTB), and return address stack (RAS). Systems and methods may be provided for lookahead instruction fetching for processors. The systems and methods may include an L1 instruction cache comprising a plurality of lines of data, where each line may include one or more instructions. They may also include a tagless hit instruction cache that stores a subset of the lines in the L1 instruction cache. Instructions stored in the tagless hit instruction cache may carry metadata indicating whether the next instruction is guaranteed to reside in the tagless hit instruction cache. An instruction fetcher may be arranged to have direct access to both the L1 instruction cache and the tagless hit instruction cache, and the tagless hit instruction cache may be arranged to have direct access to the L1 instruction cache.</p> <p>LIFE can reduce both energy consumption and power requirements with no or negligible impact on application execution times. It can be used to reduce energy consumption in embedded processors to extend battery life, and to decrease the power requirements of general-purpose processors to help address heat issues. LIFE, unlike most energy-saving features, does not come at the cost of increased execution time. It will result in a significant improvement over the state of the art, extending battery life and making mobile computing more practical. Finally, it will allow general-purpose processors to run at a faster clock rate while generating similar heat.</p>
System and Method of Probabilistic Password Cracking Dr. Sudhir Aggarwal 11-189 Dr. Matthieu Dumont <p>Professor Aggarwal and his team have created a system and method of probabilistic password cracking.</p> <p>This technology is a novel password cracking system that generates password structures in highest-probability order. The program, called UnLock, automatically creates a probabilistic context-free grammar (CFG) based upon a training set of previously disclosed passwords.</p> <p>This CFG is then used to generate word-mangling rules and, from them, password guesses for password cracking attacks.</p> <h2>Advantages:</h2> <ul> <li>Effectiveness demonstrated on real password sets</li> <li>Capable of cracking 36% to 93% more passwords than John the Ripper, a publicly available standard password cracking program</li> <li>Tested in digital forensic missions</li> </ul>
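A toy version of the grammar-driven guessing can be sketched as follows (the structures, segment sets, and probabilities are invented for illustration and are far smaller than anything UnLock would learn): base structures such as "letters then digits" carry probabilities learned from training passwords, each terminal segment carries its own probability, and guesses are emitted in descending probability order.

```python
from itertools import product

# Toy grammar "learned" from a training set (all probabilities assumed).
structures = {("L", "D"): 0.6, ("D", "L"): 0.4}    # base structures
terminals = {
    "L": {"pass": 0.7, "word": 0.3},               # letter segments
    "D": {"123": 0.8, "99": 0.2},                  # digit segments
}

def guesses_in_probability_order():
    """Expand every base structure into concrete guesses and emit them
    from most to least probable, as a probabilistic cracker would."""
    out = []
    for struct, p_s in structures.items():
        choices = [terminals[s].items() for s in struct]
        for combo in product(*choices):
            word = "".join(term for term, _ in combo)
            p = p_s
            for _, p_t in combo:
                p *= p_t
            out.append((p, word))
    out.sort(key=lambda pair: -pair[0])
    return [w for _, w in out]
```

Trying guesses in this order front-loads the most likely passwords, which is why a probability-ordered cracker outperforms fixed word-mangling rule lists.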
System and Methods for Analyzing and Modifying Passwords Dr. Sudhir Aggarwal 12-044 Dr. Matthieu Dumont <p>Professor Aggarwal's team developed a system for analyzing and modifying passwords in a manner that provides the user with a strong yet usable and memorable password. The user proposes a password that has relevance and can be remembered, and the invention evaluates it to ascertain its strength. The evaluation is based on a probabilistic password cracking system that is trained on sets of revealed passwords and can generate password guesses in highest-probability order. If the proposed password is strong enough, it is accepted; otherwise the system rejects it, modifies it, and suggests one or more stronger passwords. The suggested passwords differ only slightly from the proposed password, so the user ends up with a tested, strong, and memorable password.</p>
Sub-seasonal Forecasts of Winter Storms and Cold Air Outbreaks Dr. Ming Cai 16-090 Dr. Matthieu Dumont <p class="lead">"Our technology is a dynamics-statistics hybrid model to forecast continental-scale cold air outbreaks 20-50 days in advance, beyond the 2-week limit of predictability for weather."</p> <p style="font-size: 18px;" class="font_8"><span style="font-size: 18px;">Professor Cai's team has developed a technology for sub-seasonal forecasts of cold air outbreaks in the winter season. These forecasts are based on the relationship between the intensity of the atmospheric mass circulation and cold air outbreaks. The poleward mass circulation aloft into the polar region, including its stratospheric component, is coupled with the equatorward mass circulation out of the polar region in the lower troposphere. The strengthening of the latter is responsible for cold air outbreaks in mid-latitudes.</span></p> <p style="font-size: 18px;" class="font_8"><span style="font-size: 18px;">Due to the inherent predictability limit of 1-2 weeks for numerical weather forecasts, operational numerical weather forecast models no longer have useful skill beyond a lead time of about 10 days. Recently, research by Professor Cai and his team has shown that operational models do possess useful skill for atmospheric anomalies over the polar stratosphere in cold seasons, owing to the models' ability to capture the poleward mass circulation into the polar stratosphere.</span></p> <p style="font-size: 18px;" class="font_8"><span style="font-size: 18px;">They calculate the stratospheric mass transport into the polar region from forecast outputs of the US NOAA NCEP's operational CFSv2 model and use it as a forecast of the strength of the atmospheric mass circulation. An anomalous strengthening of this circulation indicates a high probability of cold air outbreaks in mid-latitudes. They further derive a set of forecast indices describing the state of the stratospheric mass circulation to obtain the detailed spatial pattern and intensity of the associated cold air outbreak events.</span></p> <p style="font-size: 18px;" class="font_8"><span style="font-size: 18px;">Because cold air outbreak events are accompanied by the development of low- and high-pressure systems and frontal circulations, these forecasts are also indicative of snow, freezing rain, high wind, icing, and other winter-storm-related hazards, in addition to a large area of below-normal cold temperatures.</span></p> <p><a href="">Forecast website</a></p> <p><a href="">Professor Cai in the news</a></p>
Systems and Methods for Improving Processor Efficiency Dr. David Whalley 13-101 Dr. Matthieu Dumont <p>Dr. Whalley's team has created data cache systems designed to enhance the energy efficiency and performance of computing systems. A data filter cache herein may be designed to store a portion of the data stored in a level-one (L1) data cache. The data filter cache may reside between the L1 data cache and a register file in the primary compute unit, and may therefore be accessed before the L1 data cache when a request for data is received and processed. Upon a data filter cache hit, access to the L1 data cache may be avoided. The smaller data filter cache may therefore be accessed earlier in the pipeline than the larger L1 data cache to promote improved energy utilization and performance. The data filter cache may also be accessed speculatively, based on various conditions, to increase the chances of a data filter cache hit.</p> <p>Furthermore, tagless access buffers (TABs) can optimize energy efficiency in various computing systems. Candidate memory references in an L1 data cache may be identified and stored in the TAB. Various techniques may be implemented for identifying the candidate references and allocating them into the TAB. Groups of memory references may also be allocated to a single TAB entry, or to an extra TAB entry (such that two lines in the TAB may be used to store L1 data cache lines), for example when a strided access pattern spans two consecutive L1 data cache lines. Certain other embodiments relate to data filter cache and multi-issue tagless hit instruction cache (TH-IC) techniques.</p>
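The filter-cache behavior can be sketched as a tiny direct-mapped cache sitting in front of the L1 (a behavioral model only; line size, capacity, and the dictionary L1 are assumptions): hits are served without touching the L1, and only misses cost an L1 access.

```python
class FilterCache:
    """Tiny direct-mapped cache in front of the L1: on a hit the L1
    access is skipped entirely, saving the energy of the larger lookup."""
    def __init__(self, lines=4):
        self.lines = lines
        self.store = {}          # slot -> line number currently held
        self.l1_accesses = 0     # energy proxy: how often L1 was touched

    def read(self, addr, l1):
        line = addr // 16                  # 16-byte lines (assumption)
        slot = line % self.lines
        if self.store.get(slot) == line:   # filter-cache hit: L1 untouched
            return l1[line]
        self.l1_accesses += 1              # miss: go to the larger L1
        self.store[slot] = line            # install the fetched line
        return l1[line]

l1 = {n: n * 10 for n in range(64)}        # stand-in L1 contents
fc = FilterCache()
for addr in [0, 4, 8, 32, 0, 32]:          # repeated lines hit the filter
    fc.read(addr, l1)
```

Six reads touch the L1 only twice here; the locality of real instruction and data streams is what makes such a small structure absorb most accesses.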
System and Method for Generating a Benchmark Dataset for Real Noise Reduction Evaluation Dr. Adrian Barbu 15-046 Dr. Matthieu Dumont <p>Images taken with smartphones or point-and-shoot digital cameras often come out noisy due to lack of sufficient lighting. This low-light noise problem is widespread, being present in all of the more than one billion smartphones in the world, and it leaves consumers disappointed and frustrated with the quality of images taken in low light. While a number of commercial denoising packages are available on the market, most are trained on images corrupted by artificial noise rather than on real low-light noisy images. Since artificial noise has different characteristics than real noise, these packages do not perform as well as a denoising algorithm trained on images corrupted by real noise, such as the one we have created.</p> <p>We have developed a fully automatic, state-of-the-art algorithm (RENOIR) for denoising smartphone and digital camera images that have low-light noise problems. The RENOIR algorithm could either be sold directly to the public as a standalone application or licensed to smartphone or digital camera manufacturers to be embedded in their devices.</p>
Intelligent Wi-Fi Packet Relay Protocol Dr. Zhenghao Zhang 13-089 Dr. Matthieu Dumont <p>L2Relay is a novel packet relay protocol for Wi-Fi networks that can improve the performance and extend the range of the network. A device running L2Relay is referred to as a relayer, which overhears the packet transmissions and retransmits a packet on behalf of the Access Point (AP) or the node if no acknowledgement is overheard. L2Relay is ubiquitously compatible with all Wi-Fi devices. L2Relay is designed to be a layer 2 solution that has direct control over many layer 2 functionalities such as carrier sense. Unique problems are solved in the design of L2Relay including link measurement, rate adaptation, and relayer selection. L2Relay was implemented in the OpenFWWF platform and compared against the baseline without a relayer as well as a commercial Wi-Fi range extender. The results show that L2Relay outperforms both compared schemes.</p>
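The relayer's core decision rule can be sketched as an event-driven model (a behavioral illustration only; the real protocol operates at layer 2 with carrier sense, link measurement, and rate adaptation, none of which are modeled, and the timeout value is an assumption): the relayer overhears data frames and ACKs, and schedules a retransmission for any frame whose ACK is not overheard in time.

```python
def relay_decision(events, timeout=2):
    """Decide which overheard packets a relayer should retransmit:
    any packet with no overheard ACK within `timeout` time units.
    `events` is a time-sorted list of (time, kind, packet_id)."""
    sent = {}          # packet id -> time first overheard
    acked = set()
    retransmit = []
    for t, kind, pkt in events:
        if kind == "data":
            sent.setdefault(pkt, t)
        elif kind == "ack":
            acked.add(pkt)
        # fire any pending retransmissions whose timer has expired
        for p, t0 in list(sent.items()):
            if p not in acked and t - t0 >= timeout:
                retransmit.append(p)
                del sent[p]
    return retransmit
```

Packets the AP and node acknowledge normally are never touched, so the relayer adds overhead only on the losses it was deployed to repair.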
The Spot Method for Detecting Compromised Computers in a Network Zhenhai Duan 09-148 Matthieu Dumont <p>Threats to computer network security are increasing, particularly in the “botnet” scenario, where computers in a network are infected by malware programs (e.g., viruses, spyware, worms) that enable third parties to take control of the machines without the owners’ knowledge. Compromised computers, also known as “zombies,” can markedly decrease the efficiency of a network.</p> <p>Current malware detection programs are only capable of detecting known malware agents; however, new malware is continuously being developed, so detection programs are chronically behind and require frequent updates. Additionally, most detection methods do not allow for global monitoring of machines on a network.</p> <p>Unlike current malware detection programs that focus on the point of infection, Dr. Duan has developed a new program, SPOT, that focuses on the number of outgoing messages originated or forwarded by each computer on a network to identify the presence of compromised machines. SPOT uses three algorithms that were specifically developed for the system. The first is based on the percentage of spam messages that originate from or are forwarded by an internal machine. The second is based on the number of spam messages that originate from or are forwarded by an internal machine. The third is based on a statistical method called the sequential probability ratio test (SPRT). Importantly, SPOT analyzes the total number of messages sent by a machine rather than only the rate at which they are sent, to thwart spammers who purposely slow the rate of message transmission in order to work around the system. The SPOT system enables individual networks to globally monitor their computers and to automatically, accurately, and efficiently detect and remove compromised computers in an online manner. This novel detection method is applicable to a wide range of settings in which computer networks play an essential role.</p> <h2><span>Applications:</span></h2> <ul> <li>Computer security industry.</li> <li>Any industry needing computer security (e.g., government agencies, financial institutions, research laboratories).</li> </ul> <h2>Advantages:</h2> <ul> <li>May be incorporated into new or added to existing networks at low cost.</li> <li>Only a single copy of the SPOT software is needed to protect a network.</li> <li>Fills security holes left by existing malware detection programs that focus only on the point of intrusion.</li> <li>May be used in combination with other malware detection programs.</li> <li>Detects compromised computers quickly and accurately, with low false positive and false negative rates.</li> </ul> password,spot,computers
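The SPRT at the heart of SPOT's third algorithm can be sketched as follows (the spam probabilities and error rates are illustrative assumptions, not SPOT's tuned values): each outgoing message classified as spam or not spam updates a log-likelihood ratio between "machine is compromised" and "machine is clean," and a decision is reached as soon as either threshold is crossed.

```python
import math

def sprt(observations, p_spam=0.9, p_ham=0.2, alpha=0.01, beta=0.01):
    """Sequential probability ratio test: accumulate the log-likelihood
    ratio of 'compromised' vs 'clean' over spam (1) / non-spam (0)
    observations, stopping at Wald's thresholds."""
    upper = math.log((1 - beta) / alpha)   # accept 'compromised'
    lower = math.log(beta / (1 - alpha))   # accept 'clean'
    llr = 0.0
    for x in observations:
        if x:
            llr += math.log(p_spam / p_ham)
        else:
            llr += math.log((1 - p_spam) / (1 - p_ham))
        if llr >= upper:
            return "compromised"
        if llr <= lower:
            return "clean"
    return "undecided"
```

Because the test is sequential, a verdict typically arrives after only a handful of messages, which is what lets SPOT detect zombies quickly with low false positive and false negative rates.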
Variable Height Interactive Kiosk Dr. Helzer & Dr. Bowermeister 08-080 Brent Edington <p>The present invention describes a kiosk that is field installed and capable of allowing a person to interact with a program, such as the Medicaid program, or communicate directly with a counselor for the program, so that the person may enroll, change health plans, review benefits, etc., from a remote location.</p> <p>The kiosk allows a user to navigate through a Medicaid process via a touch-screen monitor, facilitated by an onscreen navigator employing ladder logic. The kiosk has a housing to which a bracket holding the monitor and a keyboard is attached; the bracket can slide up and down and tilt back and forth. A user initially identifies him or herself to the system via a card reader and confirms identity via a biometric reader. Once within the system, the user navigates via the monitor with the assistance of an onscreen navigator set up using ladder logic. At any point during the process, a counselor may be called via a telephone attached to the housing. Once all data is collected for a user, the internal processor transmits the data to a central server.</p> <p>The variable height interactive kiosk allows the user's application to be submitted to the appropriate agency for processing in real time and with a high degree of certainty of receipt. Should a subcomponent of the overall application system fail, the kiosk provides a level of redundancy so that such failures can be cured immediately upon identification, possibly without the user ever knowing of the failure.</p>
Simple, Accurate and Fast Web-Based Analysis Tool for the Stock Market Dr. Piyush Kumar 12-193 Matthieu Dumont <p>The present invention describes a novel system and method of aggregating and predicting stock rankings. A financial data model based on a neighborhood model, this invention allows users to predict the trend of a continuous time series, given knowledge of other similar time series. It also solves the proximal problem of rank aggregation: given a set of rankings based on different parameters, produce an optimal ranking that uses the earning capability of a ticker as the primary pivot. To achieve this, each ticker is projected as a point in a high-dimensional space.</p> <p>The system and method then use a ranking optimization method to predict the ranking of each stock based on percentage change in price. The invention facilitates investor trading by using a novel methodology to predict stock rankings and provide a neighborhood of related stocks, all behind an easy-to-use interface.</p> <h2>Advantages:</h2> <ul> <li>Ranks stock tickers registered on NASDAQ based on different market parameters, within a given sector, or as charts in which a ticker's rank, rather than its price, is shown at different hours of the day</li> <li>Predicts pricing trends</li> <li>Provides recommendations based on portfolio and budget, and short-term predictions with reasons (i.e., why ticker X is ranked first)</li> <li>The entire web interface (including the visualizations) will be implemented using HTML5/CSS3 so that it stays accessible from any mobile device (including Apple devices)</li> </ul>
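The neighborhood idea above (each ticker a point in a feature space; a ticker's trend predicted from similar tickers) can be illustrated with a plain k-nearest-neighbor ranking. This is a minimal sketch of the general technique, not the patented method; the feature values and tickers are made up:

```python
# Illustrative k-NN ranking sketch: treat each ticker as a point in a
# feature space, predict its percentage price change as the mean over its
# k nearest neighbors with known changes, then rank queries best-first.
import math

def knn_rank(features, pct_change, query_names, k=2):
    """features: {ticker: feature tuple}; pct_change: {ticker: known %change};
    query_names: tickers to rank. Returns queries sorted by predicted %change."""
    def dist(a, b):
        return math.dist(features[a], features[b])
    preds = {}
    for t in query_names:
        neighbors = sorted((n for n in pct_change if n != t),
                           key=lambda n: dist(t, n))[:k]
        preds[t] = sum(pct_change[n] for n in neighbors) / k
    return sorted(query_names, key=lambda t: preds[t], reverse=True)
```

A production system of the kind described would use an approximate nearest-neighbor index rather than this brute-force scan, and real market features rather than toy coordinates.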
Automated Extraction of Bio-entity Relationships from Literature Dr. Zhang 12-065 Matthieu Dumont <p>The current invention discloses an automated and standardized software application, system, and method of extracting relationships, for example bio-entity relationships, from text or literature.</p> <p>The long-standing need for an improved, automated, and more efficient text-mining procedure is met by a new and useful computer-implemented software application. The software is accessible from non-transitory, computer-readable media and provides instructions for a computer processor to extract textual relationships or semantic information from non-annotated data by natural language processing and graph-theoretic algorithms.</p> <h2>Advantages:</h2> <ul> <li>Addresses the problems (high incidence of error and high cost of text mining) and deficiencies (low efficiency and lack of an organized, standardized format) of the prior art.</li> </ul> <h2>Applications:</h2> <ul> <li>Building biomedical databases, search engines, knowledge bases, or any other applications that use organized relationships of content within the literature.</li> </ul>
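To make the task concrete, here is a deliberately simple pattern-based sketch of bio-entity relationship extraction. The actual invention uses natural language processing and graph-theoretic algorithms; the relation verbs and entity names below are assumptions for illustration only:

```python
# Toy relation extraction: find (entity, verb, entity) triples in a
# sentence, given a known entity list and a few relation verbs. A real
# system would use NLP parsing rather than a regular expression.
import re

def extract_relations(sentence, entities):
    """Return (subject, relation, object) triples found in the sentence."""
    verbs = r"(inhibits|activates|binds|regulates)"
    ent = "(" + "|".join(map(re.escape, entities)) + ")"
    pattern = re.compile(ent + r"\s+" + verbs + r"\s+" + ent)
    return [m.groups() for m in pattern.finditer(sentence)]
```

The extracted triples are exactly the kind of organized relationships the Applications section envisions feeding into biomedical databases or knowledge bases.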
PetroOrg Software Yuri Corilo 13-093 Abby Queale <p>EXCLUSIVELY LICENSED</p> <p><span style="font-family: inherit; font-size: 0.875rem; line-height: 1.4;">Software specifically designed to process, assign, organize, and visualize the elemental composition of samples of petroleum and its derivatives acquired by high-resolution mass spectrometry.</span></p>
Central Executive Training for ADHD Dr. Michael Kofler 16-106 Dr. Matthieu Dumont <p>Attention-deficit/hyperactivity disorder (ADHD) is a complex, chronic, and potentially debilitating disorder of brain, behavior, and development that affects approximately 5.4% of school-aged children at an annual U.S. cost of illness of over $42 billion. Medication and behavioral treatment are effective for reducing symptoms, but they are considered maintenance therapies because their benefits disappear within minutes to hours after treatment is stopped. Clearly, novel treatments are needed.</p> <p>Central Executive Training (CET) is a novel, evidence-informed, computerized training protocol developed based on recent advancements in clinical and neuropsychological science. It differs fundamentally from existing, capacity-based “working memory training” programs. Each of CET’s 9 training games implements advanced algorithms that adapt based on the child’s performance and build capabilities across three empirically identified functions of the midlateral prefrontal cortex. These three functions involve dual-processing, continuous updating, and temporal ordering, and are collectively known as the brain’s ‘central executive.’</p> <p>Central executive abilities are targeted in CET based on fMRI evidence of significant cortical underdevelopment in these areas in children with ADHD. Importantly, our previous work has shown that hyperactivity and inattentive symptoms are most pronounced in children with ADHD when they are engaged in activities that challenge their underdeveloped central executive abilities. In fact, several studies have found that children with ADHD do not show attention deficits or hyperactivity during conditions with minimal central executive demands.</p> <p>Our preliminary data show that CET is superior to the current gold-standard psychosocial treatment (behavioral parent training) for improving working memory in children with ADHD.
Our data also show that CET is superior to the gold standard for reducing hyperactivity symptoms measured using high-precision actigraphs that sample children’s movement 16 times per second. CET was equivalent to the current gold standard for reducing ADHD symptoms based on parent report. A randomized clinical trial of CET is underway.</p> ADHD
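The description says each training game "adapts based on the child's performance." CET's actual adaptation algorithms are not disclosed here; the following is a hedged sketch of the simplest common form such adaptation takes, a one-up/one-down staircase, offered purely to illustrate the idea:

```python
# Assumed illustration only: a one-up/one-down staircase that raises task
# difficulty after a correct response and lowers it after an error,
# keeping the trainee near the edge of current capacity.

def adapt_difficulty(level, correct, min_level=1, max_level=10):
    """Return the next difficulty level given the last response."""
    if correct:
        return min(level + 1, max_level)
    return max(level - 1, min_level)
```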
Economic Analysis System Julie Harrington 15-197 Brent Edington <p><strong>Citrones</strong> and <strong>Citronem</strong> products are software-supported Multiplier Matrix files based on economic business-establishment variables, using standard business sector classifications such as the North American Industry Classification System (NAICS) or the Standard Industrial Classification System (SIC), per geographic region (e.g., Nation, State, County, Zip Code, Congressional District, etc.).</p> <p><strong>Citrones</strong> and <strong>Citronem</strong> products enable users to conduct economic and business research, such as industrial operations (IO) research, industrial cluster analyses, economic impact analyses (including direct, indirect, and induced impacts), and economic forecasting, among other areas. The products provide a level of inter-industry detail that is not currently available in the market. For example, the typical economic-impact modeling software product generates about 1,056 multipliers. By comparison, <strong>Citrones</strong> and <strong>Citronem</strong> generate approximately 1.35 million multipliers, giving a much higher resolution of inter-industry detail at the six-digit level. This granular set of readily applicable and accessible multipliers will allow for exponentially greater detail and higher-quality results in any economic impact analysis. The Multiplier Matrix is available in user-friendly formats (e.g., Excel, SAS, and SPSS). <strong>Citrones</strong> and <strong>Citronem</strong> are based on the largest sample of national establishments’ data available and will easily compete (with an expected market advantage) with any product in its category in the current marketplace.</p>
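For readers unfamiliar with multiplier matrices, the standard way such multipliers are derived in input-output economics is via the Leontief inverse. The sketch below shows that textbook computation on a hypothetical two-sector example; it is not Citrones/Citronem's proprietary data or methodology:

```python
# Textbook Leontief-inverse computation: given a technical coefficients
# matrix A (inter-industry requirements per dollar of output), the
# multiplier matrix is (I - A)^-1, and each column sum is that sector's
# total output multiplier. Example data here is invented.
import numpy as np

def leontief_multipliers(A):
    """Return (Leontief inverse, per-sector output multipliers)."""
    n = A.shape[0]
    L = np.linalg.inv(np.eye(n) - A)
    return L, L.sum(axis=0)
```

A product tabulating ~1.35 million multipliers is, in effect, precomputing and publishing entries of matrices like this at six-digit NAICS resolution across many regions.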