

The Mars Sample Return (MSR) campaign is an ambitious multi-year collaborative effort between NASA and the European Space Agency (ESA) to return scientifically selected rock and soil samples from Mars to Earth. Bringing samples back from Mars has been a priority goal of the planetary science community for decades as samples would provide a wealth of scientific information that cannot be obtained by current robotic surface missions or remote sensing from orbit. Analyzing the samples in advanced laboratories here on Earth has the potential to revolutionize our understanding of Mars and help answer key questions about the potential for life beyond Earth.

Perseverance’s role in the MSR campaign is to collect scientifically worthy rock and soil samples from Jezero Crater using its drill and sample caching system. Jezero Crater is a 28-mile-wide basin located on the western edge of Isidis Planitia, just north of the Martian equator. Billions of years ago, Jezero held a lake fed by a river that deposited a delta. Scientists believe this location preserves a rich geological record that could provide vital clues about Mars’s early climate and potential for life.

Perseverance carries 43 sample tubes that can each store one core sample about the size of a piece of chalk. Using its 7-foot-long robotic arm, drill, and other instruments like cameras and spectrometers, Perseverance will identify and study geologically interesting rock formations and sedimentary layers that could contain traces of ancient microbial life or preserve a record of past environments such as the ancient lake. Under carefully controlled sterile conditions, Perseverance’s drill will then take core samples from selected rocks, and the rover will transfer them to sealed tubes.

The carefully cached samples will then remain on the surface of Mars until a future MSR mission can retrieve them for return to Earth, hopefully within the next 10 years. Leaving the samples on the surface minimizes the risk of contaminating Earth with any Martian material and allows the scientific study of samples to happen under optimal laboratory conditions here with sophisticated equipment far beyond the capabilities of any Mars surface mission.

Perseverance began caching samples in its first session at “Rochette” in September 2021 and as of March 2022 had already cached nine samples. It plans to continue collecting samples at Jezero Crater to ensure the most scientifically compelling samples are returned to Earth for detailed analysis. The tubes will be deposited at carefully documented “cache” locations along the rover’s route so future missions know where to retrieve them. In total, Perseverance has the capability to cache up to 38 samples by the end of its prime mission.

The ambitious MSR architecture currently envisions three complex missions to retrieve and return the cached Perseverance samples. The first, currently targeted for launch in 2028, is the Sample Retrieval Lander, which carries the Mars Ascent Vehicle and Orbiting Sample container (MAV/OS). This lander would touch down near Perseverance’s cached samples.

The lander would then deploy a small fetch rover to retrieve the cache tubes left by Perseverance at the designated cache location(s) and transfer them into the Orbiting Sample container aboard the MAV. The MAV would then lift off from the Martian surface and release the sealed sample container into a secure orbit around Mars.

The next critical MSR mission is the Earth Return Orbiter (ERO), targeted for launch in 2030. The ERO spacecraft would travel to Mars and capture the orbiting sample container placed in orbit by the MAV. The ERO would then depart Mars and begin the seven-month, 230-million-mile trip back to Earth carrying the priceless samples. To prevent terrestrial contamination, the samples would remain sealed within their containment module until re-entry.

The third mission planned is the Earth Entry Vehicle (EEV), targeted to launch in 2031. This mission would rendezvous with the returning ERO spacecraft and, using a capsule, heat shield, and parachutes, safely land the sample containers in Utah’s west desert, where scientists can extract the Mars samples under strict planetary-protection protocols in new laboratories built specifically for this purpose.

The unprecedented MSR campaign has the potential to revolutionize our understanding of Mars and address questions that have intrigued scientists for generations, such as whether Mars ever supported microbial life. Careful caching by Perseverance and meticulous retrieval and return by the future MSR elements provide the best opportunity for scientific discovery while ensuring planetary protection. Perseverance’s diligent efforts at Jezero Crater to select and cache compelling rock core samples during its ambitious multi-year exploration leave promising potential for future scientists to examine Martian treasures from the safety of Earth.


Telegram has experienced significant challenges with content moderation since its launch in 2013. As an encrypted messaging platform that promotes privacy and security, Telegram has had to balance those core values with removing illegal or dangerous content from its service.

One of the primary moderation challenges Telegram faces stems from its encryption and decentralized design. Telegram cannot read the contents of its end-to-end encrypted “secret chats,” and private groups and channels are likewise closed to outside scrutiny, so moderators cannot easily view private conversations to detect rule-breaking content. Telegram can access and moderate public channels and groups, but its more than 550 million users communicate via a mix of public and private groups and channels. The inability to view private communications hinders Telegram’s ability to proactively detect and remove illegal content.

Compounding this issue is the platform’s lack of centralized servers. While Telegram servers coordinate communication between users, actual message data and file storage is decentralized and distributed across multiple data centers around the world. This architecture was designed for robustness and to avoid single points of failure, but it also means content moderation requires coordination across many different legal jurisdictions. When illegal content is found, taking it down across all active data centers in a timely manner can be challenging.

Telegram’s mostly automated moderation also has difficulty with the contextual nuances and intentions behind communications, which human moderators can more easily discern. Machine learning and AI tools used to filter banned keywords or images still struggle with subtle forms of extremism, advocacy of violence, manipulation techniques, and other veiled but harmful communications. Overly broad filtering can also lead to censorship of legitimate discussions. Striking the right balance is an ongoing task for Telegram.

Laws and regulations around online content also differ greatly between countries and regions. Complying with these rules fully is nearly impossible given Telegram’s global user base and decentralized infrastructure. This has led to bans of Telegram in countries like China, Iran, and Indonesia over objections to Telegram’s perceived inability to moderate according to local laws. Geoblocking access or complying with takedown requests from a single nation also goes against Telegram’s goal of unfettered global communication.

Disinformation and coordinated manipulation campaigns have also proliferated on Telegram in recent years, employed for political and societal disruption. These “troll farms” and bots spread conspiracies, propaganda, and polarized narratives at scale. Authoritarian regimes have utilized Telegram in this way to stifle dissent. Identifying and countering sophisticated deception operations poses a substantial cat-and-mouse game for platforms like Telegram.

On the other side of these constraints are concerns about overreach and censorship. Users rightly value Telegram because of its strong defense of free expression and privacy. Where should the line be drawn between prohibited hate speech or harmful content versus open discussion? Banning certain movements or figures could also be seen as a political act depending on context. Balancing lawful moderation with preventing overreach is a nuanced high-wire act with no consensus on the appropriate approach.

The largely unregulated crypto community has also tested Telegram’s rules as scams, pump-and-dumps, and unlicensed financial services have proliferated on its channels. Enforcing compliance with securities laws across national borders with decentralized currencies raises thorny dilemmas. Again, the debate centers on protecting users versus limiting free commerce. There are rarely straightforward solutions.

Revenue generation to fund moderation efforts also introduces its own challenges. Many see advertising as compromising Telegram’s values if content must be curated to appease sponsors. Paid subscriptions could gate harmful groups but would also splinter communities. Finding a business model aligned with user privacy and trust presents barriers of its own.

In short, as a huge cross-border platform for private and public conversations, Telegram faces a multifaceted quagmire in content governance with no easy answers. Encryption, decentralization, jurisdictions, disinformation operations, regulation imbalances, cultural relativism, monetization, and an unwillingness to compromise core principles all complicate strategic decision making around moderation. It remains an open question as to how well Telegram can grapple with this complexity over the long run.

The barriers Telegram encounters in moderating its massive service span technical limitations, legal complexities across geographies and topics, resourcing challenges, and fundamental tensions between openness, harm reduction, compliance, and autonomy. These difficulties will likely persist without consensus on how to balance the trade-offs raised or revolutionary technological solutions. For now, Telegram can only continue refining incremental approaches via a combination of community guidelines, reactive takedowns, and support for lawful oversight – all while staying true to its user-focused security model. This is a difficult road with no victors, only ongoing mitigation of harms as issues arise.


Scalability is one of the major issues blockchains need to address. As the number of transactions increases on a blockchain, the network can experience slower processing times and higher costs. The Bitcoin network, for example, can only process around 7 transactions per second due to the limitations of the proof-of-work consensus mechanism. In comparison, Visa processes around 1,700 transactions per second on average. The computational requirements of mining or validating new blocks also increase as more nodes participate. This poses scalability challenges for blockchains to support widespread mainstream adoption.
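The work behind proof of work can be illustrated with a toy miner, sketched below in Python. This is a deliberately simplified model, not any real chain’s implementation: the miner brute-forces a nonce until the block’s SHA-256 hash has a required number of leading zero hex digits, and each extra zero multiplies the expected search by 16, which is why throughput is capped by how much hashing the network can afford.

```python
import hashlib

def mine(block_data: str, difficulty: int) -> tuple[int, str]:
    """Search for a nonce whose SHA-256 hash of block_data + nonce
    starts with `difficulty` hex zeros. Expected work grows ~16x
    for each additional zero required."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1

# Difficulty 4 already takes tens of thousands of hashes on average;
# Bitcoin's real difficulty requires quintillions.
nonce, digest = mine("block #1: alice->bob 5", 4)
```

Real networks retune the difficulty so blocks keep arriving at a fixed average interval regardless of total hash power, so adding miners raises security, not throughput.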

A related issue is high transaction fees during periods of heavy network usage. When the Bitcoin network faces high transaction volume, users have to pay increasingly higher miner fees to get their transactions confirmed in a timely manner. This is not practical or feasible for small payment transactions. Ethereum has faced similar issues of high gas prices during times of network congestion as well. Achieving higher scalability through techniques such as sidechains, sharded architectures, and optimization of consensus algorithms is an active area of blockchain research and development.
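The fee spikes described above follow directly from miners filling a fixed-size block with the most profitable transactions. The Python sketch below is a toy mempool model (the transaction values and the `select_for_block` helper are invented for illustration): when demand exceeds block capacity, only the highest fee-per-byte transactions make it in, so users must outbid each other.

```python
def select_for_block(mempool, capacity):
    """Greedy block building: mempool is a list of (txid, size_bytes, fee);
    pick transactions by fee-per-byte, highest first, until the block is full."""
    chosen, used = [], 0
    for txid, size, fee in sorted(mempool, key=lambda t: t[2] / t[1], reverse=True):
        if used + size <= capacity:
            chosen.append(txid)
            used += size
    return chosen

# Four equally sized transactions competing for room for only two:
mempool = [("a", 250, 500), ("b", 250, 5000), ("c", 250, 100), ("d", 250, 2500)]
print(select_for_block(mempool, 500))  # → ['b', 'd']
```

Transactions "a" and "c" simply wait until congestion eases or their senders rebroadcast with a higher fee, which is the behavior users experience as unpredictable costs.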

Another challenge is slow transaction confirmation times, particularly for proof-of-work based blockchains. On average, it takes Bitcoin around 10 minutes to add a new block to the chain and confirm transactions. Other blockchains have even longer block times. For applications requiring real-time or near real-time transaction capabilities, such as retail payments, these delays are unacceptable. Fast confirmation is critical for providing a seamless experience to users. Achieving both security and speed is difficult, requiring alternative protocol optimizations.
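The 10-minute figure is only an average. Block discovery in proof-of-work mining is well modeled as a Poisson process, so the wait for the next block is exponentially distributed and memoryless, which is why individual confirmations can take far longer than the mean. A small sketch under that standard modeling assumption:

```python
import math

def prob_block_slower_than(minutes: float, mean: float = 10.0) -> float:
    """P(next block takes longer than `minutes`), assuming exponential
    inter-block times with the given mean (Bitcoin targets ~10 min)."""
    return math.exp(-minutes / mean)

# Roughly 1 block in 7 takes more than 20 minutes to appear:
print(round(prob_block_slower_than(20), 3))  # → 0.135
```

For a retail payment that needs several confirmations, these tail delays compound, which is why point-of-sale use cases push toward faster-finality protocols or off-chain layers.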

Privacy and anonymity are lacking in today’s public blockchain networks. While transactions are pseudonymous, transaction amounts, balances, and addresses are publicly viewable by anyone. This lack of privacy has hindered the adoption of blockchain in industries that deal with sensitive data like healthcare and finance. New protocols will need to offer better privacy-preserving technologies like zero-knowledge proofs and anonymous transactions in order to meet regulatory standards across jurisdictions. Significant research progress must still be made in this area.
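One of the simplest building blocks behind such privacy-preserving protocols is a cryptographic commitment: publish a hash now, reveal the value later, and anyone can verify you didn’t change it. The Python sketch below is a toy hash commitment for intuition only; it is not a zero-knowledge proof and the function names are invented for this example.

```python
import hashlib
import secrets

def commit(value: str) -> tuple[str, bytes]:
    """Commit to a value without revealing it. The random nonce blinds
    the value so observers cannot brute-force guessable inputs."""
    nonce = secrets.token_bytes(16)
    digest = hashlib.sha256(nonce + value.encode()).hexdigest()
    return digest, nonce

def verify(digest: str, nonce: bytes, value: str) -> bool:
    """Check a revealed (nonce, value) pair against the published digest."""
    return hashlib.sha256(nonce + value.encode()).hexdigest() == digest

digest, nonce = commit("balance: 42")
```

Real systems such as zk-SNARK-based protocols go much further, proving statements about committed values (e.g. "this balance is sufficient") without ever revealing them.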

Security of decentralized applications also remains challenging, with bugs and vulnerabilities commonly exploited when contracts are not implemented carefully. Smart contracts are prone to attacks such as reentrancy bugs and race conditions if not thoroughly stress-tested, audited, and secured. Because blockchains lack centralized governance, vulnerabilities may persist for extended periods. Developers will need to focus more on security best practices from the start when designing decentralized applications, and users must be educated on the associated risks.
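The reentrancy class of bug can be shown without any blockchain at all. The Python toy below (a hypothetical `VulnerableVault`, not real contract code) pays out before updating its ledger, so a malicious callback can re-enter `withdraw` and drain more than it deposited; this is the same ordering flaw behind the 2016 DAO exploit, and the standard fix is to update state before making any external call.

```python
class VulnerableVault:
    """Toy ledger with the classic reentrancy flaw: it pays out
    *before* zeroing the balance, so a callback can withdraw again."""

    def __init__(self):
        self.balances = {}
        self.pool = 0

    def deposit(self, who, amount):
        self.balances[who] = self.balances.get(who, 0) + amount
        self.pool += amount

    def withdraw(self, who, callback):
        amount = self.balances.get(who, 0)
        if amount > 0 and self.pool >= amount:
            self.pool -= amount
            callback(amount)            # external call happens first (the bug)
            self.balances[who] = 0      # state is updated too late

vault = VulnerableVault()
vault.deposit("honest", 100)
vault.deposit("attacker", 10)

calls = []
def reenter(amount):
    """Attacker's callback: re-enters withdraw before the balance is zeroed."""
    calls.append(amount)
    if len(calls) < 3:
        vault.withdraw("attacker", reenter)

vault.withdraw("attacker", reenter)
print(sum(calls))  # → 30: the attacker extracted 30 from a 10 deposit
```

Swapping the last two lines of `withdraw` (zero the balance, then pay) closes the hole; in Solidity this ordering is known as the checks-effects-interactions pattern.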

Environmental sustainability is a concern for energy-intensive blockchains employing proof-of-work. The massive computational power required for mining on PoW networks like Bitcoin (and Ethereum, before its 2022 switch to proof of stake) results in significant electricity usage that contributes to carbon emissions on a global scale. Estimates show the Bitcoin network alone uses more electricity annually than some medium-sized countries. Transitioning to consensus mechanisms that consume less energy is a necessity for mass adoption. Many alternatives are still in development, however, and have not yet demonstrated security guarantees equal to those of PoW.

Cross-chain interoperability has also been challenging, limiting the ability to transfer value and data between different blockchain networks in a secure and scalable manner. Enabling easy integration of separate blockchain ecosystems, platforms and applications through cross-chain bridges and protocols will be required to drive multi-faceted real-world usage. Various protocols are being worked on, such as Cosmos, Polkadot and Ethereum 2.0, but overall interoperability remains at a nascent stage still requiring further innovation, experimentation and maturation.

Lack of technical expertise in the blockchain field has delayed adoption. Blockchain technology remains relatively new and unfamiliar even to developers. Training and expanding the talent pool skilled in blockchain development, as well as raising cybersecurity proficiency overall, will play a crucial role in addressing challenges around scalability, privacy, security and advancing the core protocols. Increased knowledge transfer to academic institutions and the open-source community worldwide can help boost the foundation for further blockchain progress.

While significant advancements have been made in blockchain technology since Bitcoin’s creation over a decade ago, there are still several limitations preventing mainstream adoption at scale across industries. Continuous innovation is crucial to address the challenges of scalability, privacy, security, and other roadblocks through next-generation protocols and consensus mechanisms. Collaboration between the academic research community and blockchain developers will be integral to realize blockchain’s full transformational potential.


The capstone project is intended to be a culmination of the skills and knowledge gained throughout the Nanodegree program. It provides students an opportunity to demonstrate their proficiency and ability to independently develop and complete a project from concept to deployment using the tools and techniques learned.

To help guide students through this ambitious independent project, Udacity provides both mentorship support and a structured peer feedback system. Mentors are industry professionals who review student work and provide guidance to help ensure projects meet specifications and stay on track. Students also rely on feedback from their peers to improve their work before final submission.

Each student is assigned a dedicated capstone mentor from Udacity’s pool of experienced mentors at the start of the capstone. Mentors have deep expertise in the relevant technical field and have additionally received training from Udacity on providing constructive guidance and feedback. The role of the mentor is to review interim project work and hold check-in meetings to discuss challenges, evaluate progress, and offer targeted advice for improvement.

Mentors provide guidance on the design, implementation, and deployment of the project from the initial proposal, through standups and work-in-progress reviews. Students submit portions of their work—such as architecture diagrams, code samples, and prototypes—on a regular basis for mentor review. The mentor evaluates the work based on the program rubrics and provides written and verbal commentary. They look for demonstration of key skills and knowledge, adherence to best practices, and trajectory toward successful completion. Their goal is to steer students toward high-quality results through constructive criticism and suggestions.

For complex projects spanning several months, mentors typically schedule individual video conferences with each student every 1-2 weeks. These meetings allow for a more comprehensive check-in than written feedback alone. Students can then demonstrate live prototypes, discuss technical difficulties, and receive live coaching from their mentors. Meeting frequency may increase as project deadlines approach to ensure students stay on track. Mentors are also available via email or chat outside of formal meetings to answer any questions that come up.

In addition to mentor support, students provide peer feedback to their fellow classmates throughout the capstone. After each work-in-progress submission, students anonymously review two of their peers’ projects. They evaluate based on the same rubrics as the mentors and leave thoughtful written comments on project strengths and potential areas for improvement. Students integrate this outside perspective into further iterations of their work.

Peer feedback ensures diverse opinions beyond just the assigned mentor. It also allows students to practice evaluating projects themselves and learn from reviewing others’ work. Students have found peer feedback to be extremely valuable—seeing projects from an outside student perspective often surfaces new ideas. The feedback is also meant to be shaped as constructive suggestions rather than personal criticism.

Prior to final submission, students go through an internal “peer review” where they swap projects and conduct a deep code review with another classmate. This acts as a final checkpoint before projects are polished and submitted to the mentors for evaluation. Students find bugs, pinpoint potential improvements, and get another set of eyes to ensure their work is production-ready before the evaluation process begins.

The structured mentoring and peer review procedures employed during Nanodegree capstones are essential for guiding students through substantial self-directed projects. They provide regular project monitoring, surface issues early, and let work improve iteratively in response to feedback. With support from both mentors and peers, students can confidently develop advanced skills and demonstrate their learning through a polished final portfolio project. The combination of human expertise and community input helps maximize the outcome of each student’s capstone experience.


The first step is to find an existing open source project that interests you and that you think you could potentially contribute value to. Some good places to search for open source projects include GitHub, SourceForge, GitLab, and similar platforms where many open source developers host and manage their code. You’ll want to browse through projects in areas that align with your skills and interests. Consider factors like the project’s activity level, number of open issues, how beginner-friendly it seems, and whether the codebase looks accessible enough for you to potentially make meaningful contributions as a new contributor.

Once you’ve identified a few potential projects, review their documentation to understand what types of contributions they are looking for and any guidelines they have for new contributors. Pay close attention to contribution guidelines and style guides, as following these properly will be important for having your code merged. You may also want to look at the project’s issue tracker to get a sense of common issues and potential ones you could help resolve. At this point, it’s a good idea to join the project’s communication channels like Slack or Discord if they have them to start to engage with core developers.

With a potential project in mind, the next step is to pick an issue or feature that interests you and seems achievable within the scope of a capstone. Review the issue description and any conversations thoroughly to fully understand what is being requested. You may need to ask clarifying questions in the issue. For enhancements or new features without an existing issue, you’ll need to provide a clear proposal in a new issue before beginning code work. Get explicit agreement that your proposed contribution would be a good fit for the project.

With an agreed upon task, you are ready to start coding! Be sure to fork the project’s repository to your own GitHub or other hosting account before making any code changes. As you work, document your process through comments in the code and updates in the applicable issue. Write thorough tests to validate your code works as intended. Check any style guides and follow the project’s code formatting and quality standards. Commit changes to your fork frequently with detailed, self-explanatory commit messages.

Once you have completed your task and tested your changes, you are ready to submit a pull request for review. A high-quality pull request is important, so take the time to write a description that clearly explains your changes and how to test them. Request reviews from one or more core committers listed on the project. Be sure to address all feedback in the pull request conversation, making additional commits if needed. Engaging fully with the review process is an important learning opportunity before the code is merged.

With all feedback addressed, the pull request is ready for final merging once all reviewers have approved. Celebrate your first open source contribution! Consider additional issues you could take on, or ways to otherwise continue engaging with and supporting the community. You’ll want to document your experience contributing to the open source project as part of your capstone paper or report. Highlight what you learned, challenges you overcame, and how contributing aligns with your academic and career interests and goals going forward.

Maintaining a good relationship with the open source project you contributed to can be valuable for references or future collaboration opportunities. Continue engaging on communication channels, consider taking on more significant issues, or potentially helping with overall project management tasks if your contributions are appreciated. Promoting your work on social media is also an excellent way to demonstrate your skills and experience to potential employers.

Contributing to an open source project can be a highly rewarding learning experience when done right. Taking the time to thoughtfully select a project, clearly define the scope of your work, communicate effectively, and thoroughly test your code will serve you well throughout your software development career. It’s a process that takes patience but pays off in valuable new skills that can also be highlighted on your resume or capstone. With practice, contributing to open source can become a natural way to both learn and give back to the community.