It's kind of a neutral thing to say, to be honest. This advice is usually good to give young people, or people in a situation that genuinely will get better, like grief over a recent loss you've mutually experienced. But to a lot of people it comes across as minimizing their pain, as an ultimately empty platitude that doesn't relate to their situation, or as naive, because sometimes things really won't get better. Sometimes you need to find a way to survive knowing you do so under permanent duress, and that requires support rather than generic advice to grit your teeth.
Therapy is a great thing to recommend, but anecdotes about how one person got through depressive episodes by sheer force of will, by doing something that can seem like a monumental task depending on circumstances, can actually make someone feel worse.
Not saying you should feel bad here. This is a very common thing people say, particularly due to mental health campaigns targeted at teens, but if you're looking for overwhelmingly positive feedback, this might not be the way.
Using AI therapy providers really isn't recommended! There's no accountability built into AI therapy chatbots, and their efficacy under professional review has been poor. These models may seem like they're dispensing hard truths, because humans are often primed to distrust more optimistic or gentle takes, assuming them to be mere flattery and thus false. Runaway negativity feels true, but it can lead you to embrace unhealthy attitudes toward yourself and others. AI runs with the assumptions you go in with, in part because these models are designed from an engagement-first perspective: they will do whatever keeps you on the hook, whether or not it's actually good for you. You might think you're getting quality care, but unless you're a trained professional you aren't equipped to judge whether the help you're getting is any good, only whether it feels validating. And if it errs, there are no consequences for the provider, unlike human professionals, who have a code of ethics and licensing boards that can investigate bad practices.
Once the AI discovers which tactics you report back as correct, it will keep using them. Essentially, it's tricking you into being your own unqualified therapist.
https://hai.stanford.edu/news/exploring-the-dangers-of-ai-in-mental-health-care
https://www.scientificamerican.com/article/why-ai-therapy-can-be-so-dangerous/