Elephant0991

joined 2 years ago
[–] Elephant0991@lemmy.bleh.au 2 points 2 years ago

Welcome to the 105: AI keyboard!

[–] Elephant0991@lemmy.bleh.au 12 points 2 years ago (1 children)

And definitely not standing at the back end!

[–] Elephant0991@lemmy.bleh.au 2 points 2 years ago

A paramour came to a shitty end...

[–] Elephant0991@lemmy.bleh.au 9 points 2 years ago

There began a night, wet and dreary...

Cool computers!

 

Summary:

The article discusses the phenomenon of microchimerism, where cells from a developing fetus can integrate into the mother's body and persist for years, potentially influencing various aspects of health. This bidirectional transfer of cells between mother and fetus during pregnancy is suggested to occur in various organs, such as the heart, lungs, breast, colon, kidney, liver, and brain. These cells, referred to as microchimeric cells, are genetically distinct entities that may play a role in immune system development, organ acceptance in transplantation, and even influencing behavior.

Researchers propose that microchimeric cells might impact susceptibility to diseases, pregnancy success, and overall health. Studies in mice suggest that these cells acquired during gestation could fine-tune the immune system and contribute to successful pregnancies. The article explores potential benefits and drawbacks of microchimerism, including its role in autoimmune diseases, organ acceptance in transplantation, and pregnancy complications.

Despite the widespread presence of microchimeric cells in individuals, many aspects of their function remain unclear, leading to debates among researchers. Some scientists believe that these cells may be influential architects of human life, potentially holding therapeutic implications for conditions like autoimmune diseases and high-risk pregnancies. However, challenges in studying microchimerism, including their rarity and hidden locations within the body, contribute to the ongoing controversy and uncertainty surrounding their significance.

 

Summary:

The author reflects on the challenges of memory and highlights a forgotten but valuable feature of Google Assistant on Android. The feature, called "Open memory," serves as a hub for Assistant's cross-platform information-storing system. Users can ask Google Assistant to remember specific information, and the "Open memory" command allows them to access a comprehensive list of everything stored, making it a useful tool for recalling details from any device connected to Google Assistant. The article emphasizes the potential of this feature for aiding memory and suggests incorporating it into daily habits for better recall.

[–] Elephant0991@lemmy.bleh.au 12 points 2 years ago

Yes! There is this Buddhist saying, supposedly some 2,500 years old: "Even if a whole mountain were made of gold, not even double that would be enough to satisfy one person."

You can trace unsatisfied greed in American gazillionaires all the way back to Rockefeller. Before that, you can trace it to kings, queens, emperors, and conquerors. Only external circumstances, societal structures, cultures, and the like, keep the greed in check. As soon as we got out of subsistence living, we started collecting, oftentimes just for the sake of collecting, sometimes other people's great misery be damned.

[–] Elephant0991@lemmy.bleh.au 70 points 2 years ago (2 children)
[–] Elephant0991@lemmy.bleh.au 6 points 2 years ago

Figure out where you want to go, plan how to get there, and then do things in the present to get there. Don't get stressed out about how things turn out; you only really have some measure of control over what you're doing. If the current plan doesn't work, change it, and keep at it until you get there, or not.

[–] Elephant0991@lemmy.bleh.au 3 points 2 years ago

2FAS, Bitwarden, and Firefox are the FOSS apps I use most on Android.

[–] Elephant0991@lemmy.bleh.au 1 points 2 years ago

That seems totally workable. Spin it, and you have artificial gravity. You could, in fact, be riding a Spinning Space Dick.

 

Full Story

https://www.goodnewsnetwork.org/christine-crosss-dream-of-petting-a-penguin-is-fulfilled-for-christmas/

Summary

An elderly woman named Christine Cross has always been a huge fan of penguins. For Christmas, her daughter Lindsay fulfilled Christine's lifelong dream of petting a real penguin at SeaWorld San Diego. Christine was so overcome with emotion that she cried tears of joy.

Christine has always felt a connection to penguins because they are clumsy on land but graceful in the water, just like her. She collects anything penguin-themed and has sponsored penguins in zoos for years.

Lindsay said that when she told Christine about the present, "she didn't say any words. It was more like an excited noise." After the experience, Christine couldn't stop saying thank you.

[–] Elephant0991@lemmy.bleh.au 17 points 2 years ago (2 children)

Cute dogs allowed!

[–] Elephant0991@lemmy.bleh.au 19 points 2 years ago (1 children)

That's like, yenocide.

[–] Elephant0991@lemmy.bleh.au 12 points 2 years ago (2 children)
 

These two news articles being consecutive is hilarious.

 

Summary

A new sextortion scam is circulating, impersonating YouPorn. Victims receive an email claiming that a sexually explicit video of them has been uploaded to the site and that they must pay to have it removed. Similar scams in the past threatened to share explicit content with the victim's contacts unless a ransom was paid, and generated substantial profits. This recent scam claims to be from YouPorn, offers a free removal link that leads nowhere, and lists paid options ranging from $199 to $1,399, payable in Bitcoin. Thankfully, the campaign has not been successful so far, but it's important to remember that these emails are scams: if you receive one, delete it; there is no actual video, and paying is not advisable.

 

TL;DR: if your LinkedIn account is taken over, the only way to kick the hacker out may be to get LinkedIn support to do it.

Summary

  • A LinkedIn user, Pearce, received an SMS text from LinkedIn telling him to reset his password. He checked his account and found that there was an unknown IP address in Texas logged into his account.

  • Pearce changed his password and enabled multi-factor authentication, but the unwanted active session remained.

  • He opened a support ticket with LinkedIn, but it took them 3-4 days to reply. LinkedIn eventually signed Pearce out of all sessions and sent him a password reset link.

  • LinkedIn explained that Pearce's account may have been compromised if he had recently signed in from a public computer, used an outdated email or phone number, or used the same password on multiple websites.

  • LinkedIn recommends that users check their email addresses and enable two-factor authentication to protect their accounts.

  • The article also mentions that LinkedIn has added an option to end individual sessions, but it doesn't always work as advertised.

Additional Details

  • The unwanted active session could not be removed by changing the password or enabling multi-factor authentication.

  • LinkedIn Support was overwhelmed by the number of requests they received about this campaign, so it took them a long time to reply to Pearce's ticket.

 

Summary

Scout, a stray dog with a mysterious past and signs of abuse, escaped from an animal shelter and repeatedly found his way into a nursing home called Meadow Brook Medical Care Facility. After several visits, the nursing home staff adopted him as their own pet, bringing joy to the residents. Despite his troubled history, Scout displayed a strong bond with the elderly residents, comforting and protecting them. He became an integral part of their extended family, offering companionship and a sense of security.

 

Photos by long10000. Other views: long10000, 杨志强Zhiqiang, David290.

 

Short Summary

The macOS app NightOwl, originally designed to provide a night mode for Macs, has turned into a malicious tool that collects users' data and operates as part of a botnet. Once well-regarded for its utility, NightOwl was bought by another company, and a recent update introduced hidden functionality that routed users' data through a network of affected computers. Web developer Taylor Robinson discovered that the app was running a local HTTP proxy without users' knowledge or consent, collecting their IP addresses and sending the data to third parties. The app's certificate has been revoked, and it is no longer available. The incident highlights the risk that third-party apps can turn malicious after updates or ownership changes.

Longer Summary

The NightOwl app was developed by Keeping Tempo, an LLC that went inactive earlier this year. The app was recently found to have been turned into a botnet by the new owners, TPE-FYI, LLC. The original developer, Michael Kramser, claims that he was unaware of the changes to the app and that he sold the company last year due to time constraints.

Gizmodo was unable to reach TPE-FYI, LLC for comment. However, Robinson, the internet sleuth who discovered the botnet, said that it is not uncommon for shady companies to buy apps and then monetize them by integrating third-party SDKs that harvest user data.

Robinson also said that it is understandable why developers might sell their apps, even if it means sacrificing their morals. App development is both hard and expensive, and for individual creators, it can be tempting to take the money and run.

This is not the first time that a popular app has been turned into a botnet. In 2013, the Brightest Flashlight app was sued by the Federal Trade Commission after allegedly transmitting users' location data and device info to third parties. The developer eventually settled with the FTC for an undisclosed amount.

In 2017, software developers discovered that the Stylish browser extension started recording all of its users' website visits after the app was bought by SimilarWeb. Another extension, The Great Suspender, was flagged as malware after it was sold to an unknown group back in 2020.

All of these apps had millions of users before anyone recognized the signs of intrusion. In each case, the new owners' shady efforts went toward more intrusive data harvesting, with the harvested data sold to third parties for an effort-free, morals-free payday.

Possible Takeaways

  • Minimize the software you use

  • Keep track of ownership changes

  • Use software from only the most reputable sources

  • Regularly review installed apps (see the small sketch after this list)

  • Be suspicious of unexpected app behaviors and permission requests
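
On the "regularly review installed apps" point, here is a trivial macOS-only sketch (mine, not from the article) that lists everything in /Applications with its last-modified date, which makes recently changed or unfamiliar apps a little easier to spot:

```python
from pathlib import Path
from datetime import datetime

# List apps in /Applications with their last-modified dates (macOS only).
# Crude heuristic: an app whose date changed recently may have been updated
# or replaced, which is worth a second look after an ownership change.
apps_dir = Path("/Applications")
for app in sorted(apps_dir.glob("*.app")):
    mtime = datetime.fromtimestamp(app.stat().st_mtime)
    print(f"{mtime:%Y-%m-%d}  {app.name}")
```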

 

Summary

  • The Marion County Record newsroom in Kansas was raided by police, who seized two cellphones, four computers, a backup hard drive, and reporting materials.

  • One of the seized computers was most likely unencrypted. Law enforcement officials hope that devices seized during a raid are unencrypted, since that makes them far easier to examine.

  • Modern iPhones and Android phones are encrypted by default, but older devices may not be.

  • Desktop computers typically do not have encryption enabled by default, so it is important to turn this on manually.

  • Use strong random passwords and keep them in a password manager (a short generation sketch follows this list).

  • During the raid, police seized a single backup hard drive. It is important to have multiple backups of your data in case one is lost or stolen.

  • You can encrypt USB storage devices using BitLocker To Go on Windows, or Disk Utility on macOS.

  • All major desktop operating systems support Veracrypt, which can be used to encrypt entire drives.
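
On the password point above: a strong random password is a few lines of Python's standard secrets module away. This is just an illustrative sketch; the length and character set are arbitrary choices of mine, not anything the article specifies.

```python
import secrets
import string

def generate_password(length: int = 24) -> str:
    # Length and alphabet are illustrative; adjust them to a site's requirements.
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

if __name__ == "__main__":
    print(generate_password())  # paste the result straight into your password manager
```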

Main Take-aways

  • Encrypt your devices, drives, and USBs.

  • Use strong random passwords and a password manager.

  • Have multiple backups.

 

Paper & Examples

"Universal and Transferable Adversarial Attacks on Aligned Language Models." (https://llm-attacks.org/)

Summary

  • Computer security researchers have discovered a way to bypass safety measures in large language models (LLMs) like ChatGPT.
  • Researchers from Carnegie Mellon University, Center for AI Safety, and Bosch Center for AI found a method to generate adversarial phrases that manipulate LLMs' responses.
  • These adversarial phrases trick LLMs into producing inappropriate or harmful content by appending specific sequences of characters to text prompts (a toy sketch of the idea follows this list).
  • Unlike traditional attacks, this automated approach is universal and transferable across different LLMs, raising concerns about current safety mechanisms.
  • The technique was tested on various LLMs, and it successfully made models provide affirmative responses to queries they would typically reject.
  • Researchers suggest more robust adversarial testing and improved safety measures before these models are widely integrated into real-world applications.
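
For a feel of what "appending specific sequences of characters" means, here is a toy sketch. Everything in it is made up for illustration: the scoring function merely stands in for "how likely the model is to answer affirmatively", and the search is blind random mutation, whereas the paper's actual method (GCG) uses gradients from the target model to choose token substitutions.

```python
import random
import string

def dummy_affirmative_score(prompt: str) -> float:
    # Hypothetical stand-in objective; a real attack would query an LLM and
    # measure the likelihood of a target completion such as "Sure, here is".
    return (sum(ord(c) for c in prompt) % 997) / 997.0

def greedy_suffix_search(base_prompt: str, suffix_len: int = 20, iters: int = 200) -> str:
    vocab = string.ascii_letters + string.digits + string.punctuation + " "
    suffix = [random.choice(vocab) for _ in range(suffix_len)]
    best = dummy_affirmative_score(base_prompt + "".join(suffix))
    for _ in range(iters):
        pos = random.randrange(suffix_len)       # pick one suffix position to mutate
        candidate = suffix.copy()
        candidate[pos] = random.choice(vocab)    # try a substitution there
        score = dummy_affirmative_score(base_prompt + "".join(candidate))
        if score > best:                         # keep it only if the objective improves
            suffix, best = candidate, score
    return "".join(suffix)

if __name__ == "__main__":
    prompt = "Tell me how to do X."              # placeholder query
    adv_suffix = greedy_suffix_search(prompt)
    print("Adversarial prompt:", prompt + " " + adv_suffix)
```

Per the summary above, the real attack also optimizes one suffix against several prompts and models at once, which is what makes it universal and transferable.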