The question of how sign processes can be meaningful to artificial agents has been a fundamental one for cognitive science throughout its history, from Turing’s (1950) argument from consciousness, to Searle’s (1980) Chinese room and Harnad’s (1990) symbol grounding problem. Currently, the question is even more pressing in light of recent developments in AI robotics, specifically in the area of reactive and evolutionary approaches. One might expect that, given the embodied and embedded nature of these systems, meaningful sign processes would emerge from the interactions between these robots and their environment. So far, however, robots seem to lack any sensitivity to the significance of signs. In this chapter we suggest that the artificiality of the body of current robots precludes the emergence of meaning. Indeed, one may question whether the label “embodied” genuinely applies to current robots. It may be more truthful to speak of robots being “physicalized,” given that the types of matter used in creating robots bear more similarity to machines such as cars or airplanes than to organisms. Thus we are driven to an investigation of how body and meaning relate. We suggest that meaning is closely related to the strengths and weaknesses of the organic bodies of cognitive systems in their struggle for survival. Specifically, as long as four essential characteristics of organic bodies (autopoiesis, metabolism, centrifugal development, and self-organization) are lacking in artificial systems, there will be little possibility of the emergence of meaningful sign processes.