Duplicate Content
Duplicate content on your website is something that search engines have long pushed back against, to the point where it is taken very seriously by SEOs. Duplicate content is problematic for search engines because they don't want to serve duplicate results to searchers.
If you have two different URLs that contain identical content, search engines will typically only want to serve one of those URLs as a search result for a given query. If you have two URLs with different body content but identical h1 and title tags, search engines may struggle to decide which one is the most suitable option to include in search results for a given query.
This can result in these URLs effectively competing with each other for the same query, potentially hurting each other's ability to rank (this is known as keyword cannibalization).
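To make this concrete, here is a minimal sketch that checks a handful of URLs for identical title tags, one quick way to spot pages that may be competing for the same query. It is an illustration only: the URLs are placeholders, and a real audit would also compare h1s and body content, usually via a crawling tool rather than a script like this.

```python
# Sketch: flag pages that share the same <title>, a common signal of
# keyword cannibalization risk. The URLs below are placeholders.
from collections import defaultdict
from html.parser import HTMLParser
from urllib.request import urlopen


class TitleParser(HTMLParser):
    """Collects the text inside the first <title> tag of a page."""
    def __init__(self):
        super().__init__()
        self.in_title = False
        self.title = ""

    def handle_starttag(self, tag, attrs):
        if tag == "title" and not self.title:
            self.in_title = True

    def handle_endtag(self, tag):
        if tag == "title":
            self.in_title = False

    def handle_data(self, data):
        if self.in_title:
            self.title += data


def page_title(url: str) -> str:
    """Fetch a URL and return its normalised title text."""
    html = urlopen(url, timeout=10).read().decode("utf-8", errors="ignore")
    parser = TitleParser()
    parser.feed(html)
    return parser.title.strip().lower()


urls = [
    "https://example.com/page-a",  # placeholder URLs
    "https://example.com/page-b",
    "https://example.com/page-c",
]

pages_by_title = defaultdict(list)
for url in urls:
    pages_by_title[page_title(url)].append(url)

for title, pages in pages_by_title.items():
    if len(pages) > 1:
        print(f"Duplicate title '{title}' found on: {', '.join(pages)}")
```

Pages that share a title are not automatically a problem, but they are the first candidates to review for cannibalization.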
Duplicate content at scale
Duplicate content is relatively commonplace, and on most sites it will not cause major issues, simply because of the small scale involved. If there are a few pages with duplicate title tags, that certainly represents a missed opportunity, the size of which is determined by the keywords those pages target. But a few pages are not going to cause you major problems.
Duplicate content really becomes an issue when it happens at scale. Thousands upon thousands of duplicate pages, or duplicate content caused by a systemic issue, can really hurt a website's ability to perform well in organic search. The duplicate content would be filtered out by search engines, so the cumulative addressable market would become orders of magnitude smaller.
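As a rough illustration of how duplication at scale can be measured, the sketch below groups pages from a crawl export by a hash of their body text and counts how many URLs collapse into exact-duplicate groups. The crawl.jsonl file and its url/body fields are hypothetical stand-ins for whatever your crawler produces, and only exact duplicates are caught here; near-duplicates need fuzzier comparison.

```python
# Sketch: group crawled pages by a hash of their body content to estimate
# how many URLs are exact duplicates. The crawl.jsonl export and its
# "url"/"body" fields are hypothetical.
import hashlib
import json
from collections import defaultdict


def content_hash(html_body: str) -> str:
    """Hash of the normalised body text; identical pages share one key."""
    normalised = " ".join(html_body.split()).lower()
    return hashlib.sha256(normalised.encode("utf-8")).hexdigest()


duplicates = defaultdict(list)
with open("crawl.jsonl", encoding="utf-8") as f:
    for line in f:
        page = json.loads(line)  # one {"url": ..., "body": ...} per line
        duplicates[content_hash(page["body"])].append(page["url"])

duplicate_groups = {h: urls for h, urls in duplicates.items() if len(urls) > 1}
duplicate_urls = sum(len(urls) for urls in duplicate_groups.values())
print(f"{len(duplicate_groups)} duplicate groups covering {duplicate_urls} URLs")
```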
While Google is adamant that there is no 'duplicate content penalty', it becomes a question of semantics: if thousands of pages no longer appear in search results, it can certainly feel like a penalty.
The type and scale of the duplicate content problem should inform how high a priority it is to address.