Diluted Dante wrote:Even if we take that at face value, the language in question is 'god damn'. Americans are fucking insane.
GurtTractor wrote:Yes, your first paragraph is exactly part of what I've been saying, and it directly relates to all the factors you mention in the second. I'd appreciate it if someone would actually engage with the premise, as it's just a wee bit important and interesting to think about.
Is there a good method for determining what qualifies as harmful misinformation/disinformation, either from a media organisation or something you can think of yourself? And could such a method be practically usable to responsibly police online content?
poprock wrote:I dunno. I think your original point, based on your quote from the Substack team, was that in the presence of more and more opinion-based reporting trust in the traditionally ethical ‘fourth estate’ is eroded. Basically you’re acknowledging that people get more cynical when exposed to more nonsense.
In a pernicious cycle, these dynamics in turn give each group license to point to the excesses of the other as further justification for mistrust and misbehavior. It’s always the other side who is deranged and dishonest and dangerous. It’s the other side who shuts down criticism because they know they can’t win the argument. It’s they who have no concern for the truth. Them, them, them; not us, us, us. Through this pattern, each group becomes ever more incensed by the misdeeds of the other and blind to their own. The center does not hold.
In the online Thunderdome, it is imperative that you are not seen to engage with ideas from the wrong group; on the contrary, you are expected to marshal whatever power is at your disposal – be it cultural, political, or technological – to silence their arguments.
GurtTractor wrote:"Mostly worked fine"... Yoss those words are doing some heavy ass lifting.
Even if I accepted your baseless premise that it's all been hunky fucking dory and worked very well, I'd still submit that we should at least hope for improvement; that would be progress, i.e. making the world a better place.
BTW what are these solutions? You still haven't actually laid out any kind of method, previously existing or otherwise. 'Editors just figure it out'?
Also, polarisation is ever increasing, and modern internet communication and publication are playing a large part in that. Editors are also people, and will also be subject to the effects of that polarisation unless they apply some rigorous methodology. So again I'm asking, because I think you might actually know: how do editors determine when something is safe to publish?
GurtTractor wrote:Any attempt to adequately control an acceptable narrative is inevitably going to cause harm to the reasonable exchange of ideas at some point.
GurtTractor wrote:Are you following now, pop?
Diluted Dante wrote:Just ask Joe Rogan about it, and if he agrees with it it's misinformation.
Pilate wrote:What is truth?
GurtTractor wrote:If you think I'm taking myself very seriously then I think you've severely misunderstood my posts. It's the ideas and events that I take seriously, because I think they genuinely matter. I can understand why you might think this though, and I think it partly comes down to the last bit of your paragraph here, which I mostly agree with - I think the key lesson here is "don't take yourself too, or even at all, seriously", especially if you claim to espouse doubt and lack of expertise informing your opinions, which you evangelically do above.

I won't go into detail here, but it's pretty evident that our environment and education literally change the structure of the hardware/software of our brains; just learning a single concept can lead to a tremendous shift in worldview, effectively countering hundreds of years of cultural norms. Similarly, learning about how we think - cognitive biases, priors shaping perception, etc. - has huge benefits for media literacy. This idea that we are entirely stuck with the capabilities and limitations of this monkey brain is a big lie, in my opinion. Anyway, I won't say any more about this here, though I think it is fairly pertinent. Happy to talk about it in another thread.

Isn't the complaint rather that Twitter is designed to be engaging in a shitty way, influencing people in nefarious ways, but that of course people can use it critically and obtain valuable information from it - it just requires rather more care and oversight than, say, a well-sourced, trusted analysis from expert or on-the-ground knowledge? That it pushes confirmation bias and outrage generation at every turn, and again you can try to critically engage with that and filter it out, but our brains are basically wired to respond to it, and so it remains a harmful (overall) thing? I mean, it doesn't seem very difficult to hold two ideas at once: that Twitter, along with other tools, can provide useful links and information that circumvent official sources quickly, but that that info can come with serious verification / truth-rating risks; and that the timeline algorithms are deliberately and expertly designed to maintain engagement, and the cheapest, easiest, and fastest way to do that is abusing the way our brains work. Why anyone would get defensive about this is bizarre, and why anyone would think that "generational educational tools are needed to re-wire the evolved human brain" is an alternative think piece to that analysis is equally so.
LivDiv wrote:The word 'Steelmanning' has infuriated me.
Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith.
djchump wrote:One of the best bots ever made: https://twitter.com/PayGapApp
Yossarian wrote:Gender pay gaps only need to be published if you’ve got more than 250 people in an organisation, at which point you likely will have a fair few people in similar roles.
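For anyone wondering how that bot does it: the figures it quotes come straight from the UK government's published gender pay gap dataset, so at its core the bot is just a lookup - spot an organisation tweeting, match the account to an employer in the data, and reply with their median gap. Here's a minimal sketch of that lookup step in Python (the CSV filename, the naive name-matching, and the reply wording are all assumptions on my part; the column names follow the gov.uk download format):

```python
import csv

def load_pay_gaps(path):
    """Map employer name -> median hourly pay gap (%) from the
    gov.uk gender pay gap CSV (positive = women paid less)."""
    gaps = {}
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            # DiffMedianHourlyPercent is the published median gap column.
            if row["DiffMedianHourlyPercent"]:
                gaps[row["EmployerName"].lower()] = float(row["DiffMedianHourlyPercent"])
    return gaps

def reply_text(employer, gaps):
    """Compose the reply for a matched employer, or None if absent."""
    gap = gaps.get(employer.lower())
    if gap is None:
        return None  # not in the dataset, e.g. under the 250-staff threshold
    if gap > 0:
        return f"In this organisation, women's median hourly pay is {gap:.1f}% lower than men's."
    if gap < 0:
        return f"In this organisation, women's median hourly pay is {abs(gap):.1f}% higher than men's."
    return "In this organisation, women's and men's median hourly pay is equal."

# Hypothetical usage:
# gaps = load_pay_gaps("gender_pay_gap_2021-22.csv")
# print(reply_text("Some Employer Ltd", gaps))
```

The hard part in practice is presumably mapping a Twitter handle to the exact registered employer name, which the sketch above waves away with a lowercase string match.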