dc.contributor.advisor: Jun, Kwang-Sung
dc.contributor.author: Zhao, Yao
dc.creator: Zhao, Yao
dc.date.accessioned: 2025-11-26T20:31:52Z
dc.date.available: 2025-11-26T20:31:52Z
dc.date.issued: 2025
dc.identifier.citation: Zhao, Yao. (2025). Algorithms and Theory of Bandits in Practical Setups (Doctoral dissertation, University of Arizona, Tucson, USA).
dc.identifier.uri: http://hdl.handle.net/10150/679087
dc.description.abstract: The multi-armed bandit is a central paradigm for interactive machine learning, where an agent must learn efficiently from adaptive feedback. From a data-collection point of view, bandit algorithms can serve as a tool for efficient sequential data collection and a powerful method for solving many real-world problems. This perspective leads to the pure exploration problem, where the goal is to focus entirely on exploration during the learning process and use the collected data for a final task, such as decision-making or model training. A key aspect of this framework is that individual data collection steps have no immediate impact on the final objective. This distinguishes the pure exploration framework, which has recently shown its strength in many practical problems, from the more traditional regret minimization framework with a cumulative objective. In this dissertation, three problems are studied around this theme, ranging from theoretical foundations to practical applications. The first problem addresses a gap in the theoretical understanding of simple regret, a fundamental yet under-explored performance measure in pure exploration; since its proposal, a rigorous, instance-dependent characterization has been lacking. The second problem arises in experimental settings, such as clinical trials, where bandits are a powerful tool for accelerating discovery. However, factors like user non-compliance can confound the experimental results. We address the question of how to design efficient pure exploration algorithms to support critical decision-making in the presence of such confounding factors. The third problem concerns the alignment of Large Language Models (LLMs), which often requires a high-quality preference dataset that is expensive to collect. We study how to design efficient data exploration strategies to improve the sample efficiency of preference learning. These three problems share the common theme of efficient data collection for a final task, where the immediate impact of each data collection step is disregarded in favor of the terminal objective. This dissertation tackles these three problems by developing a suite of novel algorithms and analyses. A common goal across all contributions is the provision of an instance-dependent guarantee for each problem. Such guarantees are more practical and fine-grained than the worst-case bounds commonly seen in the literature, providing a more complete, instance-level characterization that reflects the intrinsic difficulty of each specific task.
dc.language.iso: en
dc.publisher: The University of Arizona.
dc.rights: Copyright © is held by the author. Digital access to this material is made possible by the University Libraries, University of Arizona. Further transmission, reproduction, presentation (such as public display or performance) of protected items is prohibited except with permission of the author.
dc.rights.uri: http://rightsstatements.org/vocab/InC/1.0/
dc.title: Algorithms and Theory of Bandits in Practical Setups
dc.type: text
dc.type: Electronic Dissertation
thesis.degree.grantor: University of Arizona
thesis.degree.level: doctoral
dc.contributor.committeemember: Jun, Kwang-Sung
dc.contributor.committeemember: Pacheco, Jason
dc.contributor.committeemember: Zhang, Chicheng
dc.contributor.committeemember: Li, Ming
thesis.degree.discipline: Graduate College
thesis.degree.discipline: Computer Science
thesis.degree.name: Ph.D.
refterms.dateFOA: 2025-11-26T20:31:52Z
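
As a concrete illustration of the pure exploration setting and the simple regret measure described in the abstract, the following is a minimal Python sketch, not one of the dissertation's algorithms: a hypothetical Bernoulli bandit is explored uniformly under a fixed budget, the empirically best arm is recommended, and simple regret is computed as the gap between the true mean of the best arm and that of the recommended arm. The arm means and budget below are hypothetical values chosen for illustration.

import numpy as np

# Minimal pure-exploration sketch (illustrative only): pull arms in
# round-robin fashion under a fixed budget, recommend the empirically
# best arm, and report simple regret, i.e., the gap between the true
# mean of the best arm and that of the recommended arm.

rng = np.random.default_rng(0)
true_means = np.array([0.3, 0.5, 0.7])  # hypothetical Bernoulli arm means
budget = 300                            # hypothetical total pull budget

pulls = np.zeros(len(true_means))
rewards = np.zeros(len(true_means))
for t in range(budget):
    arm = t % len(true_means)           # uniform (round-robin) exploration
    pulls[arm] += 1
    rewards[arm] += rng.binomial(1, true_means[arm])

recommended = int(np.argmax(rewards / pulls))
simple_regret = true_means.max() - true_means[recommended]
print(f"recommended arm: {recommended}, simple regret: {simple_regret:.3f}")

Note how this measure only scores the final recommendation; unlike cumulative regret, rewards accrued during the exploration phase itself do not count against the agent.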


Files in this item

Name: azu_etd_22615_sip1_m.pdf
Size: 2.030 MB
Format: PDF
