We introduce Parameter Tracking (PT), a generalization of Gradient Tracking (GT) for Federated Learning (FL), and propose two novel Adam-based FL algorithms, FAdamET and FAdamGT, that integrate PT for improved adaptive optimization. Theoretical analysis and experiments show that both methods improve convergence efficiency, reducing communication and computation costs across varying levels of data heterogeneity.
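The abstract does not spell out the update rules of FAdamET or FAdamGT, so the sketch below only illustrates the general tracking idea they build on: feeding a drift-corrected gradient into each client's local Adam step. It uses a SCAFFOLD-style control-variate correction as a stand-in for PT, a toy quadratic objective per client, and hypothetical names (`adam_step`, `c`, `c_global`) throughout; none of this is the authors' actual algorithm, and the control-variate refresh used here is exact only for plain SGD and heuristic under Adam.

```python
import numpy as np

def adam_step(x, g, m, v, t, lr, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update applied to a (tracking-corrected) gradient g."""
    m = b1 * m + (1 - b1) * g
    v = b2 * v + (1 - b2) * g * g
    m_hat = m / (1 - b1 ** t)              # bias-corrected first moment
    v_hat = v / (1 - b2 ** t)              # bias-corrected second moment
    return x - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

rng = np.random.default_rng(0)
dim, n_clients, rounds, local_steps, lr = 5, 4, 200, 5, 0.05

# Heterogeneous quadratics f_i(x) = 0.5 * ||x - b_i||^2; the optimum of
# their average is mean(b_i), so client drift is visible in this toy.
b = rng.normal(size=(n_clients, dim))
grad = lambda i, x: x - b[i]

x_global = np.zeros(dim)
c = np.zeros((n_clients, dim))             # per-client tracking variables
c_global = c.mean(axis=0)                  # server-side tracking variable

for r in range(rounds):
    updates = []
    for i in range(n_clients):
        x, m, v = x_global.copy(), np.zeros(dim), np.zeros(dim)
        for t in range(1, local_steps + 1):
            # Tracking correction: feed g_i(x) - c_i + c into Adam so the
            # local moments are built from a drift-corrected gradient.
            g_corr = grad(i, x) - c[i] + c_global
            x, m, v = adam_step(x, g_corr, m, v, t, lr)
        # SCAFFOLD-style refresh of the client tracking variable
        # (exact for plain SGD local steps; a heuristic under Adam).
        c[i] = c[i] - c_global + (x_global - x) / (local_steps * lr)
        updates.append(x - x_global)
    x_global = x_global + np.mean(updates, axis=0)
    c_global = c.mean(axis=0)              # full participation: c = mean(c_i)

print("distance to optimum:", np.linalg.norm(x_global - b.mean(axis=0)))
```

Running the sketch prints the distance between the server model and the global optimum; removing the correction term (`g_corr = grad(i, x)`) makes the effect of client drift under heterogeneous data directly visible.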