Your Japanese phrases are quite easy for machine translation: they have clearly stated subjects and little ambiguity.
The main problem is the implicit subject caused by the omission of pronouns in everyday Japanese, among other ambiguities that can only be resolved correctly in context. Gender, plural, and so on are also omitted. So far, the huge decoder-only models do a much better job of grasping/guessing these and staying consistent than small encoder-decoder MT models do.
Here is a simple phrase any beginner in Japanese would understand: "山田さん、猫いますか?はい、います。" ("Yamada-san, neko imasu ka? Hai, imasu.")
Google Translate now: Yamada-san, do you have a cat? Yes, I'm here.
When I tested it a few years ago: "Mr Yamada, does he have a cat? Yes, it has a cat"
LLama-3-8b-instruct: "Yamada-san, is there a cat? Ah, yes, there is."
GPT 3.5: "Does Yamada-san have a cat? Yes, he/she does."
gemini-1.5-pro-api-0409-preview: Does Mr./Ms. Yamada have a cat? Yes, they do.
LLama-3-70B-instruct: "Yamada-san, do you have a cat? Yes, I do."
GPT 4, Command-R+: "Mr. Yamada, do you have a cat? Yes, I do."
The two Google Translate outputs are simply terrible. Technically they are possible translations, but I would rate them as wrong.
All the LLMs gave possible translations, but the larger the LLM, the more likely the translation was to be correct. The top two chose a gender, even though keeping "-san" is a clever way of avoiding that choice, albeit not a very pure translation. If I had to give only one translation, I would pick either of those two.
Also of note: while some models even explained each element of their translation, none mentioned that other translations were possible.
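The divergent outputs above can be made concrete by just enumerating the readings. A minimal sketch (the reading list is my own illustration, not drawn from any model): the subject pronoun is omitted, gender and plural are unmarked, and います can be read as existence ("is there...") or possession ("do you have...").

```python
# Hypothetical enumeration of plausible English readings of
# "山田さん、猫いますか?" — each one is defensible without more context.

subjects = ["you", "he", "she", "they"]  # the omitted pronoun could be any of these

# Existential reading of います:
readings = ["Yamada-san, is there a cat?"]

# Possessive readings, one per candidate subject:
for subj in subjects:
    verb = "does" if subj in ("he", "she") else "do"
    readings.append(f"Yamada-san, {verb} {subj} have a cat?")

for r in readings:
    print(r)  # five readings from one short, perfectly ordinary sentence
```

Every model output quoted above lands on one of these readings; the question is only which one it guesses.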