
Big Data

Name | Investigator | Tech ID | Licensing Manager Name | Licensing Manager Email | Description
Intelligent Wi-Fi Packet Relay Protocol | Dr. Zhenghao Zhang | 13-089 | Michael Tentnowski | mtentnowski@fsu.edu

L2Relay is a novel packet relay protocol for Wi-Fi networks that improves performance and extends range. A device running L2Relay, called a relayer, overhears packet transmissions and retransmits a packet on behalf of the access point (AP) or the client node if no acknowledgement is overheard. L2Relay is compatible with all Wi-Fi devices. It is designed as a layer 2 solution with direct control over layer 2 functionality such as carrier sense, and its design solves several unique problems, including link measurement, rate adaptation, and relayer selection. L2Relay was implemented on the OpenFWWF platform and compared against both a baseline without a relayer and a commercial Wi-Fi range extender; it outperformed both.

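Below is a minimal sketch of the core relay decision described above: remember an overheard data frame and retransmit it only if no acknowledgement is overheard before a timeout. The frame fields, the ACK_TIMEOUT value, and the radio interface are assumptions for illustration; the actual L2Relay logic runs in layer 2 firmware (OpenFWWF) and also handles link measurement, rate adaptation, and relayer selection.

    # Minimal sketch of a relayer's core decision loop (illustrative only).
    # Frame fields, timing constants, and the radio.send() helper are
    # hypothetical; the real L2Relay operates inside layer 2 firmware.

    import time

    ACK_TIMEOUT = 0.001   # assumed ACK wait window, in seconds

    class Relayer:
        def __init__(self, radio):
            self.radio = radio          # assumed radio interface with send()
            self.pending = {}           # sequence number -> (frame, deadline)

        def on_overheard(self, frame):
            if frame.type == "DATA":
                # Remember the frame; retransmit later if no ACK is overheard.
                self.pending[frame.seq] = (frame, time.monotonic() + ACK_TIMEOUT)
            elif frame.type == "ACK" and frame.seq in self.pending:
                # Destination acknowledged; no relaying needed.
                del self.pending[frame.seq]

        def poll(self):
            now = time.monotonic()
            for seq, (frame, deadline) in list(self.pending.items()):
                if now >= deadline:
                    # No ACK overheard in time: retransmit on behalf of the sender.
                    self.radio.send(frame)
                    del self.pending[seq]
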
System and Method of Probabilistic Password Cracking | Dr. Sudhir Aggarwal | 11-189 | Michael Tentnowski | mtentnowski@fsu.edu

Professor Aggarwal and his team have created a system and method for probabilistic password cracking. This novel password cracking system generates password structures in highest-probability order. The program, called UnLock, automatically builds a probabilistic context-free grammar (PCFG) from a training set of previously disclosed passwords. The grammar is then used to generate word-mangling rules and, from them, password guesses for use in password cracking attacks (a minimal PCFG sketch follows this listing).

Advantages:
- Effectiveness demonstrated on real password sets
- Cracks significantly more passwords in the same number of guesses than publicly available standard password cracking systems
- Tested in digital forensic missions

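As a rough illustration of generating guesses in highest-probability order from a PCFG, here is a toy sketch. The base structures, terminal sets, and probabilities are invented for this example; UnLock learns them automatically from training passwords and uses a far more efficient next-guess strategy than full enumeration.

    # Toy sketch of probabilistic context-free grammar (PCFG) guess generation.
    # The structures, terminal sets, and probabilities below are invented for
    # illustration; UnLock derives them from disclosed-password training data.

    import heapq
    import itertools

    # Base structures (L=letters, D=digits, S=symbols) with assumed probabilities.
    structures = {("L", "D"): 0.6, ("L", "S", "D"): 0.4}
    terminals = {
        "L": {"password": 0.5, "monkey": 0.3, "dragon": 0.2},
        "D": {"123": 0.7, "2024": 0.3},
        "S": {"!": 0.8, "#": 0.2},
    }

    def guesses_in_probability_order(limit=10):
        """Return (probability, guess) pairs, highest probability first."""
        candidates = []
        for structure, p_struct in structures.items():
            pools = [terminals[sym].items() for sym in structure]
            for combo in itertools.product(*pools):
                prob = p_struct
                for _, p in combo:
                    prob *= p
                guess = "".join(token for token, _ in combo)
                candidates.append((prob, guess))
        return heapq.nlargest(limit, candidates)

    if __name__ == "__main__":
        for prob, guess in guesses_in_probability_order():
            print(f"{prob:.3f}  {guess}")
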
System and Methods for Analyzing and Modifying Passwords | Dr. Sudhir Aggarwal | 12-044 | Michael Tentnowski | mtentnowski@fsu.edu

Professor Aggarwal's team developed a system for analyzing and modifying passwords so that the user ends up with a password that is both strong and memorable. The user proposes a password that is relevant and easy to remember, and the system evaluates its strength. The evaluation is based on a probabilistic password cracking system that is trained on sets of revealed passwords and can generate password guesses in highest-probability order. If the proposed password is strong enough, it is accepted; if not, it is rejected, and the system suggests one or more stronger passwords that differ from the proposal by only limited modifications. The user therefore receives a tested, strong, and memorable password.

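A minimal sketch of that analyze-and-modify flow is given below. The estimate_probability function and the acceptance threshold are placeholders; in the actual system the strength score comes from the trained probabilistic cracking grammar, and suggested edits are chosen more carefully to preserve memorability.

    # Minimal sketch: estimate a proposed password's guessing probability, and
    # if it is too weak, suggest a lightly edited variant that passes the bar.
    # estimate_probability() and THRESHOLD are stand-ins for the PCFG-based score.

    import random
    import string

    THRESHOLD = 1e-14   # assumed acceptance threshold on estimated guess probability

    def estimate_probability(password: str) -> float:
        # Placeholder for how likely a cracker is to guess this password;
        # shorter passwords with fewer character classes score as more likely.
        classes = sum(any(c in s for c in password)
                      for s in (string.ascii_lowercase, string.ascii_uppercase,
                                string.digits, string.punctuation))
        return 10.0 ** -(len(password) + 2 * classes)

    def suggest_stronger(password: str, attempts: int = 50) -> str:
        """Return a small modification of the password that passes the threshold."""
        for _ in range(attempts):
            i = random.randrange(len(password) + 1)
            extra = random.choice(string.punctuation + string.digits)
            candidate = password[:i] + extra + password[i:]
            if estimate_probability(candidate) < THRESHOLD:
                return candidate
        return password  # give up; the caller may ask for a new proposal

    if __name__ == "__main__":
        proposal = "monkey123"
        if estimate_probability(proposal) < THRESHOLD:
            print("accepted:", proposal)
        else:
            print("too weak, try:", suggest_stronger(proposal))
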
Fast Dynamic Parallel Approximate Neighbor Search Data Structure Using Space Filling Curves | Piyush Kumar | 16-096 | Michael Tentnowski | mtentnowski@fsu.edu

Nearest neighbor search (NNS) locates, for a given query point, the closest point in a dataset under a distance measure such as Euclidean distance. NNS is increasingly a key sub-task in algorithms and applications used to process, organize, cluster, learn from, and understand massive data sets, such as those used in the automotive, aerospace, and geographic information system (GIS) industries.

The Problem:
Exact NNS works well for small data sets but is too time-consuming for large ones. Approximate nearest neighbor (ANN) search improves query time and saves memory by estimating the nearest neighbor, without guaranteeing that the actual nearest neighbor is returned in every case. Two limitations of this approach are that it is difficult to make an ANN algorithm dynamic (i.e., allow insertions and deletions in the data structure) and difficult to parallelize (i.e., use multiple processors to speed up queries).

The Solution:
Dr. Kumar and his research team are developing a novel, practical, and theoretically sound method that solves the NNS problem in lower-dimensional spaces. Specifically, the researchers are creating an approximate k-nearest neighbor algorithm, based on Morton sorting of points, to build a software library for approximate nearest neighbor searches in Euclidean spaces. The library uses multi-core machines efficiently (parallel) and supports insertion and deletion of points at run time (dynamic). The algorithm delivers search results with expected logarithmic query times that are competitive with or exceed Mount's approximate nearest neighbor (ANN) library (a minimal Morton-order sketch follows this listing).

Advantages:
- Speed on multicore machines
- Minimum spanning tree computation

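The sketch below shows the basic idea behind searching in Morton (Z-order): interleave coordinate bits to get a one-dimensional key, keep points sorted by that key, and answer a query by scanning a small window around the query's position in the sorted order. The 2-D setting, window size, and bit count are illustrative choices; the actual library refines this scheme, answers k-nearest queries, and parallelizes the work across cores.

    # Minimal sketch of approximate nearest neighbor lookup via Morton (Z-order)
    # sorting in 2D. Grid resolution and window size are arbitrary assumptions.

    import bisect

    def morton2d(x: int, y: int, bits: int = 16) -> int:
        """Interleave the bits of integer coordinates x and y into a Morton code."""
        code = 0
        for i in range(bits):
            code |= ((x >> i) & 1) << (2 * i)
            code |= ((y >> i) & 1) << (2 * i + 1)
        return code

    class MortonIndex:
        def __init__(self, points):
            # Keep points sorted by Morton code; insertions/deletions use bisect.
            self.items = sorted((morton2d(*p), p) for p in points)

        def insert(self, p):
            bisect.insort(self.items, (morton2d(*p), p))

        def approx_nearest(self, q, window=8):
            """Scan a small window around q's position in Morton order."""
            i = bisect.bisect_left(self.items, (morton2d(*q), q))
            lo, hi = max(0, i - window), min(len(self.items), i + window)
            return min((p for _, p in self.items[lo:hi]),
                       key=lambda p: (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2)

    if __name__ == "__main__":
        idx = MortonIndex([(3, 7), (10, 10), (200, 5), (50, 60)])
        idx.insert((12, 9))
        print(idx.approx_nearest((11, 11)))   # (10, 10) in this small example
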
Materials Genome Software to Accelerate Discovery of New Materials | Jose Mendoza-Cortes | 18-012 | Garrett Edmunds | gedmunds@fsu.edu

The creation of a materials genome can accelerate the discovery of new materials in much the same way the human genome is accelerating advances in gene therapy. It often takes 15-20 years to move advanced materials from the laboratory to the marketplace. This predictive software uses unique databases of predicted materials to drastically accelerate the discovery of new materials, allowing users in research and industry to synthesize and characterize only the most promising compounds for a desired application instead of running experimental trial and error on thousands of candidates or more. Genomes and predictive algorithms for energy storage and light-capture materials have been developed. The technology is primed to be commercialized as software as a service (SaaS).

Systems and Methods for Improving Processor Efficiency | Dr. David Whalley | 13-101 | Michael Tentnowski | mtentnowski@fsu.edu

Dr. Whalley's team has created data cache systems designed to enhance the energy efficiency and performance of computing systems. A data filter cache stores a portion of the data held in the level one (L1) data cache and resides between the L1 data cache and the register file in the primary compute unit. The data filter cache is therefore accessed before the L1 data cache when a request for data is received and processed; on a data filter cache hit, access to the L1 data cache is avoided. Because the smaller data filter cache can be accessed earlier in the pipeline than the larger L1 data cache, it promotes improved energy utilization and performance. The data filter cache may also be accessed speculatively under various conditions to increase the chance of a data filter cache hit.

Furthermore, tagless access buffers (TABs) can optimize energy efficiency in various computing systems. Candidate memory references in the L1 data cache are identified and stored in the TAB, and various techniques may be used to identify the candidate references and allocate them to the TAB. Groups of memory references may be allocated to a single TAB entry, or to an extra TAB entry (so that two lines in the TAB store L1 data cache lines), for example when a strided access pattern spans two consecutive L1 data cache lines. Other embodiments relate to data filter cache and multi-issue tagless hit instruction cache (TH-IC) techniques.

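Below is a small, illustrative simulation of the filter-cache idea: a tiny cache is checked first, and the larger L1 data cache is accessed only on a filter-cache miss. The sizes, the direct-mapped layout, and the access pattern are arbitrary assumptions; the designs described above also cover speculative access and the TAB mechanisms, which are not modeled here.

    # Illustrative simulation of a small data filter cache in front of a larger
    # L1 data cache: the filter cache is checked first, and an L1 access (and
    # its energy cost) happens only on a filter-cache miss.

    LINE_SIZE = 32          # bytes per cache line (assumed)
    FILTER_LINES = 8        # tiny filter cache
    L1_LINES = 256          # larger L1 data cache

    class DirectMappedCache:
        def __init__(self, num_lines):
            self.num_lines = num_lines
            self.tags = [None] * num_lines

        def access(self, addr):
            """Return True on a hit; install the line on a miss."""
            line = addr // LINE_SIZE
            index = line % self.num_lines
            tag = line // self.num_lines
            hit = self.tags[index] == tag
            if not hit:
                self.tags[index] = tag
            return hit

    filter_cache = DirectMappedCache(FILTER_LINES)
    l1_cache = DirectMappedCache(L1_LINES)
    l1_accesses = 0

    def load(addr):
        global l1_accesses
        if filter_cache.access(addr):
            return            # filter-cache hit: L1 access avoided
        l1_accesses += 1
        l1_cache.access(addr) # on an L1 miss, a lower level would be consulted

    for a in list(range(0, 512, 4)) * 2:   # a loop re-touching a small working set
        load(a)
    print("L1 accesses:", l1_accesses)
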
Sub-seasonal Forecasts of Winter Storms and Cold Air Outbreaks | Dr. Ming Cai | 16-090 | Michael Tentnowski | mtentnowski@fsu.edu

"Our technology is a dynamics-statistics hybrid model to forecast continental-scale cold air outbreaks 20-50 days in advance, beyond the 2-week limit of predictability for weather."

Professor Cai's team has developed a technology for sub-seasonal forecasts of cold air outbreaks in the winter season. The forecasts are based on the relationship between the intensity of the atmospheric mass circulation and cold air outbreaks. The poleward mass circulation aloft into the polar region, including its stratospheric component, is coupled with the equatorward mass circulation out of the polar region in the lower troposphere, and a strengthening of the latter is responsible for cold air outbreaks in mid-latitudes.

Because of the inherent 1-2 week predictability limit of numerical weather forecasts, operational forecast models have no useful skill for weather beyond a lead time of about 10 days. Recent research by Professor Cai and his team shows that operational models do possess useful skill for atmospheric anomalies over the polar stratosphere in cold seasons, owing to the models' ability to capture the poleward mass circulation into the polar stratosphere.

The team calculates the stratospheric mass transport into the polar region from forecast outputs of NOAA NCEP's operational CFSv2 model and uses it as a forecast of the strength of the atmospheric mass circulation; an anomalous strengthening indicates a high probability of cold air outbreaks in mid-latitudes. They further derive a set of forecast indices describing the state of the stratospheric mass circulation to obtain the detailed spatial pattern and intensity of the associated cold air outbreak events (a minimal index-and-threshold sketch follows this listing).

Because cold air outbreak events are accompanied by developing low- and high-pressure systems and frontal circulations, the forecasts are also indicative of snow, freezing rain, high wind, icing, and other winter-storm hazards, in addition to a large area of below-normal temperatures.

Forecast website: http://www.amccao.com/
Professor Cai in the news: https://weather.com/news/weather/news/snow-siberia-russia-united-states-cold

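As a highly simplified sketch of turning a forecast mass-transport series into a cold air outbreak alert, the code below standardizes a forecast against a climatology and flags lead days whose anomaly exceeds a threshold. The climatology, threshold, and the use of a single scalar index are assumptions for illustration; the team's method uses a set of indices derived from CFSv2 output to recover spatial pattern and intensity as well.

    # Standardize a forecast mass-transport series against a climatology and
    # flag lead days with an anomalously strong circulation (illustrative only).

    import statistics

    def standardized_anomalies(forecast, climatology):
        mean = statistics.mean(climatology)
        std = statistics.stdev(climatology)
        return [(x - mean) / std for x in forecast]

    def outbreak_alert_days(forecast, climatology, threshold=1.0):
        """Return lead days (1-based) whose anomaly exceeds the threshold."""
        anomalies = standardized_anomalies(forecast, climatology)
        return [day for day, a in enumerate(anomalies, start=1) if a > threshold]

    if __name__ == "__main__":
        climatology = [9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 9.7, 10.3]  # made-up units
        forecast = [10.0, 10.4, 11.2, 11.5, 10.1]                     # 5-day example
        print("elevated cold-air-outbreak risk on lead days:",
              outbreak_alert_days(forecast, climatology))
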
System and Method for Generating a Benchmark Dataset for Real Noise Reduction Evaluation | Dr. Adrian Barbu | 15-046 | Garrett Edmunds | gedmunds@fsu.edu

Images taken with smartphones or point-and-shoot digital cameras often come out noisy due to insufficient lighting. This low-light noise problem is present in every smartphone in the world, more than one billion devices, and it leaves consumers disappointed and frustrated with the quality of images taken in low light. While a number of commercial denoising packages are already available on the market, most of them are trained on images corrupted by artificial noise rather than on real low-light noisy images. Because artificial noise has different characteristics than real noise, these packages do not perform as well as a denoising algorithm trained on images corrupted by real noise, such as the one created here.

We have developed a fully automatic, state-of-the-art algorithm (RENOIR) for denoising smartphone and digital camera images affected by low-light noise. The RENOIR algorithm could either be sold directly to the public as a standalone application or be licensed to smartphone or digital camera manufacturers for embedding in their devices.

The Lookahead Instruction Fetch Engine (LIFE) | Dr. David Whalley | 08-033 | Michael Tentnowski | mtentnowski@fsu.edu

The Lookahead Instruction Fetch Engine (LIFE) provides a mechanism to guarantee instruction fetch behavior in order to avoid accesses to fetch-associated structures, including the level one instruction cache (L1 IC), instruction translation lookaside buffer (ITLB), branch predictor (BP), branch target buffer (BTB), and return address stack (RAS). The systems and methods provide lookahead instruction fetching for processors. They include an L1 instruction cache containing lines of data, each holding one or more instructions, and a tagless hit instruction cache (TH-IC) that stores a subset of those lines. Instructions stored in the TH-IC carry metadata indicating whether the next instruction is guaranteed to reside in the TH-IC. The instruction fetcher has direct access to both the L1 instruction cache and the TH-IC, and the TH-IC has direct access to the L1 instruction cache.

LIFE can reduce both energy consumption and power requirements with no or negligible impact on application execution times. It can be used to reduce energy consumption in embedded processors to extend battery life, and to decrease the power requirements of general-purpose processors to help address heat issues. Unlike most energy-saving features, LIFE does not come at the cost of increased execution time. It represents a significant improvement over the state of the art, extends battery life to make mobile computing more practical, and allows general-purpose processors to run at a faster clock rate while generating similar heat.

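The sketch below illustrates the guarantee idea in the simplest sequential case: each instruction resident in a small tagless-hit instruction cache carries a flag saying whether its sequential successor is also guaranteed to be resident, and when the flag is set the fetch can skip the L1 IC and the other fetch-associated structures. The structures, capacity, and eviction policy are simplifications; the real design also tracks guarantees across branches and interacts with the BP, BTB, and RAS.

    # Illustrative model of a tagless-hit instruction cache (TH-IC) whose entries
    # record whether the next sequential instruction is guaranteed to be resident.

    class THIC:
        def __init__(self, num_entries=16):
            self.entries = {}                    # instruction address -> metadata
            self.num_entries = num_entries

        def lookup(self, addr):
            return self.entries.get(addr)

        def install(self, addr):
            if addr in self.entries:
                return
            if len(self.entries) >= self.num_entries:
                self.entries.pop(next(iter(self.entries)))   # crude FIFO eviction
            self.entries[addr] = {"next_guaranteed": False}

    class FetchEngine:
        def __init__(self):
            self.thic = THIC()
            self.full_fetches = 0

        def fetch(self, addr, prev_addr=None):
            prev = self.thic.lookup(prev_addr) if prev_addr is not None else None
            if prev and addr == prev_addr + 1 and prev["next_guaranteed"]:
                return "TH-IC"                   # guaranteed: L1 IC/ITLB/BP skipped
            self.full_fetches += 1               # fall back to the full fetch path
            self.thic.install(addr)
            if prev is not None and addr == prev_addr + 1:
                prev["next_guaranteed"] = True   # successor now resides in the TH-IC
            return "L1"

    engine = FetchEngine()
    prev = None
    for pc in list(range(100, 110)) * 3:         # a small loop executed three times
        engine.fetch(pc, prev)
        prev = pc
    print("full fetch-path accesses:", engine.full_fetches)
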
Method for Real-Time Probabilistic Inference with Bayesian Network on GPGPU Devices | Robert Van Engelen | 17-013 | Michael Tentnowski | mtentnowski@fsu.edu

General-purpose graphics processing unit (GPGPU) devices are present in most PCs for graphics, popular for high-performance computing, and relatively inexpensive. However, algorithms must be specifically designed for these devices.

The proposed invention consists of a process and a method for efficient probabilistic inference with Bayesian probabilistic networks on GPGPU devices. Bayesian probabilistic networks are widely used for modeling probability beliefs in computational biology and bioinformatics, healthcare, document classification, information retrieval, data fusion, decision support systems, security and law enforcement, betting/gaming, and risk analysis.

The invention consists of:
1. A novel "parallel irregular wavefront process" for importance sampling with Bayesian probabilistic networks, tailored to the specific GPGPU device being used.
2. A novel method for structuring the Bayesian probabilistic network in GPGPU local memories to ensure optimal data access.

This invention increases the efficiency of probabilistic inference with Bayesian probabilistic networks on GPGPU devices. This is achieved through the specialized organization of data in device memory and through the optimized parallel process, which produces results faster. The efficiency and performance gains grow with the size of the Bayesian probabilistic network; that is, the approach scales favorably with larger networks, making real-time probabilistic inference possible on large data sets and in realistic applications.

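For orientation, the sketch below shows importance sampling (in its likelihood-weighting form) on a tiny two-node Bayesian network. This CPU-only sketch, with a made-up Rain/WetGrass network, only illustrates the underlying estimator; the invention's contribution is organizing such sampling as a parallel irregular wavefront process on a GPGPU with a device-specific memory layout, which is not shown here.

    # Likelihood-weighting importance sampling on a toy network Rain -> WetGrass.
    # Network structure and probabilities are invented for illustration.

    import random

    P_RAIN = 0.2
    P_WET_GIVEN_RAIN = {True: 0.9, False: 0.1}

    def sample_weight(evidence_wet=True):
        """Draw one weighted sample with WetGrass fixed to the observed value."""
        rain = random.random() < P_RAIN
        # Evidence variables are not sampled; the sample is weighted by their likelihood.
        weight = P_WET_GIVEN_RAIN[rain] if evidence_wet else 1 - P_WET_GIVEN_RAIN[rain]
        return rain, weight

    def posterior_rain_given_wet(num_samples=100_000):
        num = den = 0.0
        for _ in range(num_samples):
            rain, w = sample_weight(evidence_wet=True)
            num += w * rain
            den += w
        return num / den

    if __name__ == "__main__":
        # Exact answer: 0.2*0.9 / (0.2*0.9 + 0.8*0.1) = 0.692...
        print("P(Rain | WetGrass) is approximately", round(posterior_rain_given_wet(), 3))
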