Computer vision is increasingly used in farmers’ fields and agricultural experiments to quantify important traits. Imaging setups with a sub-millimetre ground sampling distance enable the detection and tracking of plant features, including size, shape, and colour. Although today’s AI-driven foundation models can segment almost any object in an image, they still fail for complex plant canopies. To improve model performance, the Global Wheat Dataset consortium assembled a diverse set of images from experiments around the globe. Following the head detection dataset (GWHD), the new dataset targets full semantic segmentation (GWFSS) of organs (leaves, stems, and spikes) covering all developmental stages. Images were collected by 11 institutions using a wide range of imaging setups. Two datasets are provided: (i) a set of 1096 diverse images in which all organs were labelled at the pixel level, and (ii) a set of 52,078 images without annotations, available for additional training. The labelled set was used to train segmentation models based on DeepLabV3Plus and Segformer. Our Segformer model performed slightly better than DeepLabV3Plus, reaching a mIoU of ca. 90 % for leaves and spikes; however, precision for stems was considerably lower, at 54 %. The major advantages over published models are: (i) the exclusion of weeds from the wheat canopy, and (ii) the detection of all wheat features, including necrotic and senescent tissues, and their separation from crop residues. This facilitates further development in classifying healthy vs. unhealthy tissue, addressing the increasing need for accurate quantification of senescence and diseases in wheat canopies.
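The per-organ scores quoted above are mean intersection-over-union (mIoU) values. As an illustration of how this metric is computed from predicted and reference label maps (a minimal sketch with hypothetical toy data, not the authors’ evaluation code):

```python
import numpy as np

def per_class_iou(pred, target, num_classes):
    """Return the intersection-over-union for each class label.

    IoU(c) = |pred==c AND target==c| / |pred==c OR target==c|;
    classes absent from both maps yield NaN and are skipped in the mean.
    """
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        ious.append(inter / union if union > 0 else float("nan"))
    return ious

# Toy 3x3 label maps: 0 = background, 1 = leaf, 2 = stem (hypothetical values)
target = np.array([[1, 1, 2],
                   [1, 0, 2],
                   [0, 0, 2]])
pred   = np.array([[1, 1, 2],
                   [1, 1, 2],
                   [0, 0, 0]])

ious = per_class_iou(pred, target, num_classes=3)
miou = np.nanmean(ious)  # mean over the three classes
```

Reporting mIoU per organ, as done here for leaves, spikes, and stems, exposes class-specific weaknesses (e.g. thin stems) that a single pooled accuracy figure would hide.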