Trust perception manipulated by information source and source labeling

As AI-generated health information proliferates online and becomes increasingly indistinguishable from human-sourced information, it is critical to understand how people trust and label such content, especially when that information may be inaccurate. We conducted a mixed-methods survey (N=142) and a within-subjects lab study (N=40) to investigate how health information source (Human, LLM), information type (General, Symptom, Treatment), and disclosed label (Human, AI) influence perceived trust, behavior, and physiological indicators. We found that AI-generated content was trusted more than human-generated content regardless of labeling, whereas human labels were trusted more than AI labels. Trust remained consistent across information types. Eye-tracking and physiological responses varied significantly by source and label, enabling prediction of perceived trust with 73% accuracy (0.35 R$^2$) and classification of the information source with 65% accuracy. We show that adding transparency labels to online health information modulates trust, and that behavioral and physiological features may help verify trust perceptions and indicate whether additional transparency is needed.