Current language models rely on an impressive number of parameters, vast amounts of training data, and substantial computing resources. Their properties are not well understood, and they lack a strong theoretical grounding in linguistics or cognitive science. The COMPO project aims to explicitly introduce biases related to compositionality and memory limitations into language models, with the long-term goal of proposing models that are less data- and resource-intensive.