
    Machine Learning for Efficient & Robust Next-Generation Communication Systems

    Name:
    azu_etd_22352_sip1_m.pdf
    Size:
    5.173 MB
    Format:
    PDF
    Author
    Teku, Noel
    Issue Date
    2025
    Advisor
    Tandon, Ravi
    
    Publisher
    The University of Arizona.
    Rights
    Copyright © is held by the author. Digital access to this material is made possible by the University Libraries, University of Arizona. Further transmission, reproduction, presentation (such as public display or performance) of protected items is prohibited except with permission of the author.
    Abstract
    Machine learning (ML)-based techniques are increasingly being incorporated into next-generation wireless systems, both for improving fundamental building blocks (e.g., modulation classification, power allocation, channel decoding) and for enabling new functionalities (e.g., AR/VR, autonomous vehicles). This dissertation makes the following contributions in these areas.

    As ML classifiers become integral to next-generation wireless systems, it is essential that their predictions be delivered both reliably and with low delay; for instance, when transmitting road-condition assessments in vehicular networks or relaying critical health data from sensors to medical providers. In our first contribution, we analyze the fundamental information-theoretic tradeoffs between latency and end-to-end distortion when communicating the results of a classifier over a noisy communication system. Using techniques from finite-blocklength channel capacity, we show that lattice-based quantization of probability distributions leads to a significant reduction in latency compared to other baselines.

    In our second contribution, we present a new approach that uses reinforcement learning (RL) to provide adaptive robustness over High Frequency (HF) channels. The HF band, which occupies the spectrum from 3 to 30 MHz, enables long-range communication by bouncing signals off the ionosphere with limited communication infrastructure. However, the turbulent nature of the channel, which causes frequent signal dropouts, has deterred heavier use of the band. To mitigate this challenge, we propose using RL to learn the optimal settings (e.g., tap length, step size, filter type, adaptive algorithm) of an adaptive equalizer, and we show that our techniques outperform adaptive equalizers with a fixed structure.

    In our third contribution, we devise an unsupervised learning-based framework to optimize cell-free networks (CFNs). CFNs depart from the model in which an access point (AP) serves only the user equipment (UEs) within a fixed radius; instead, APs deployed over a geographic region collaboratively serve every UE [6]. In doing so, CFNs increase the probability of coverage and achieve stronger diversity gains [7]. Building on these improvements, we propose an unsupervised neural network that learns how to split a UE's message across different APs so as to minimize the total latency of the CFN. We show that our unsupervised technique achieves higher probabilities of low latency than decentralized baselines, and that, when noisy channel state information is assumed, it is more robust than centralized baselines in achieving a high likelihood of low latency.

    In our final contribution, we investigate the complementary problem of ensuring privacy when aligning Large Language Models (LLMs). LLMs have been investigated for various applications owing to the broad knowledge base attained by pre-training on large corpora of data. However, it has been shown that LLMs can generate socially unacceptable responses. Alignment procedures have been proposed to train LLMs, using preference data collected from humans, to reinforce which types of responses are socially acceptable. While such methods are effective in regulating an LLM's responses, this type of training can leak privacy-sensitive information about the human labelers. To mitigate this, we study the problem of LLM alignment with labeler privacy while maintaining the utility of the alignment process. To accomplish this, we present a novel privacy-preserving approach, PROPS (PROgressively Private Self-Alignment), a multi-stage algorithm capable of ensuring preference privacy without causing a significant drop in the utility of an LLM as it undergoes alignment.
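    The lattice-quantization idea in the first contribution can be illustrated with a simple stand-in: rounding a probability vector onto the grid of multiples of 1/n while preserving total mass, so that only the grid index (far fewer bits than the raw floats) needs to be transmitted. This is only a sketch of the general technique under assumed parameters, not the dissertation's actual quantizer.

    ```python
    def quantize_pmf(p, n):
        """Quantize a probability vector onto multiples of 1/n.

        Illustrative only: rounds each mass down to the grid, then hands the
        leftover quanta to the entries with the largest fractional parts, so
        the result still sums to 1 while staying close to p on this grid.
        """
        scaled = [x * n for x in p]
        base = [int(s) for s in scaled]          # floor onto the grid
        leftover = n - sum(base)                 # quanta still to assign
        by_frac = sorted(range(len(p)),
                         key=lambda i: scaled[i] - base[i], reverse=True)
        for i in by_frac[:leftover]:             # largest remainders first
            base[i] += 1
        return [b / n for b in base]

    q = quantize_pmf([0.62, 0.25, 0.13], n=8)
    # q = [0.625, 0.25, 0.125] — a valid PMF on the 1/8 grid
    ```

    Coarsening n trades distortion for latency: a smaller n means fewer bits per classifier output, which is the tradeoff the first contribution analyzes at finite blocklength.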
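    The second contribution's idea of learning equalizer settings can be sketched, under heavy simplification, as a bandit-style RL loop over a small hypothetical set of (tap length, step size) configurations with a synthetic reward. The configuration set, reward function, and epsilon-greedy rule here are all illustrative assumptions; the dissertation's state/action spaces (filter type, adaptive algorithm, etc.) and learning algorithm are richer.

    ```python
    import random

    # Hypothetical equalizer configurations: (tap length, step size).
    CONFIGS = [(8, 0.01), (16, 0.01), (16, 0.001), (32, 0.001)]

    def run_bandit(reward_fn, steps=500, eps=0.1, seed=0):
        """Epsilon-greedy selection: explore a random config with prob. eps,
        otherwise exploit the config with the best running-mean reward."""
        rng = random.Random(seed)
        counts = [0] * len(CONFIGS)
        values = [0.0] * len(CONFIGS)
        for _ in range(steps):
            if rng.random() < eps:
                a = rng.randrange(len(CONFIGS))
            else:
                a = max(range(len(CONFIGS)), key=lambda i: values[i])
            r = reward_fn(CONFIGS[a], rng)
            counts[a] += 1
            values[a] += (r - values[a]) / counts[a]  # incremental mean
        return max(range(len(CONFIGS)), key=lambda i: values[i])

    # Synthetic stand-in for equalizer performance on one channel:
    # pretend (16, 0.001) is the best configuration, plus small noise.
    def fake_reward(cfg, rng):
        return (1.0 if cfg == (16, 0.001) else 0.3) + rng.gauss(0, 0.05)

    best_idx = run_bandit(fake_reward)
    ```

    On a real HF channel the reward would come from measured equalizer output (e.g., error rate after training symbols), and the learner would adapt as channel conditions drift rather than converge to one fixed choice.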
    Type
    text
    Electronic Dissertation
    Degree Name
    Ph.D.
    Degree Level
    doctoral
    Degree Program
    Graduate College
    Electrical & Computer Engineering
    Degree Grantor
    University of Arizona
    Collections
    Dissertations
