Android (operating system) | https://en.wikipedia.org/wiki?curid=12610483 | 8,136
Android is a mobile operating system based on a modified version of the Linux kernel and other open-source software, designed primarily for touchscreen mobile devices such as smartphones and tablets. Android is developed by a consortium of developers known as the Open Handset Alliance and commercially sponsored by Google. It was unveiled in November 2007, with the first commercial Android device, the HTC Dream, being launched in September 2008. Most versions of Android are proprietary. The core components are taken from the Android Open Source Project (AOSP), which is free and open-source software (FOSS) primarily licensed under the Apache License. When Android is installed on devices, the ability to modify the otherwise free and open-source software is usually restricted, either by not providing the corresponding source code or by preventing reinstallation through technical measures, thus rendering the installed version proprietary. Most Android devices ship with additional proprietary software pre-installed, most notably Google Mobile Services (GMS), which includes core apps such as Google Chrome, the digital distribution platform Google Play, and the associated Google Play Services development platform. Over 70 percent of Android smartphones run Google's ecosystem, some with vendor-customized user interfaces and software suites, such as Samsung's TouchWiz and later One UI, and HTC Sense. Competing Android ecosystems and forks include Fire OS (developed by Amazon), ColorOS by OPPO, OriginOS by Vivo, and MagicUI by Honor, as well as custom ROMs such as LineageOS. However, the "Android" name and logo are trademarks of Google, which imposes standards to restrict the use of Android branding by "uncertified" devices outside their ecosystem. The source code has been used to develop variants of Android on a range of other electronics, such as game consoles, digital cameras, portable media players, and PCs, each with a specialized user interface. Some well-known derivatives include Android TV for televisions and Wear OS for wearables, both developed by Google. Software packages on Android, which use the APK format, are generally distributed through proprietary application stores like Google Play Store, Amazon Appstore (including for Windows 11), Samsung Galaxy Store, Huawei AppGallery, Cafe Bazaar, and GetJar, or open source platforms like Aptoide or F-Droid. Android has been the best-selling OS worldwide on smartphones since 2011 and on tablets since 2013. It has the largest installed base of any operating system, with over three billion monthly active users, and the Google Play Store features over 3 million apps. Android 13, released on August 15, 2022, is the latest version, and the recently released Android 12.1/12L includes improvements specific to foldable phones, tablets, desktop-sized screens and Chromebooks.
Android (operating system) | https://en.wikipedia.org/wiki?curid=12610483 | 8,141
Android Inc. was founded in Palo Alto, California, in October 2003 by Andy Rubin, Rich Miner, Nick Sears, and Chris White. Rubin described the Android project as having "tremendous potential in developing smarter mobile devices that are more aware of its owner's location and preferences". The early intentions of the company were to develop an advanced operating system for digital cameras, and this was the basis of its pitch to investors in April 2004. The company then decided that the market for cameras was not large enough for its goals, and five months later it had diverted its efforts and was pitching Android as a handset operating system that would rival Symbian and Microsoft Windows Mobile. Rubin had difficulty attracting investors early on, and Android was facing eviction from its office space. Steve Perlman, a close friend of Rubin, brought him $10,000 in cash in an envelope, and shortly thereafter wired an undisclosed amount as seed funding. Perlman refused a stake in the company, and has stated "I did it because I believed in the thing, and I wanted to help Andy." In 2005, Rubin tried to negotiate deals with Samsung and HTC. Shortly afterwards, Google acquired the company in July of that year for at least $50 million; this was Google's "best deal ever" according to Google's then-vice president of corporate development, David Lawee, in 2010. Android's key employees, including Rubin, Miner, Sears, and White, joined Google as part of the acquisition. Not much was known about the secretive Android Inc. at the time, with the company having provided few details other than that it was making software for mobile phones. At Google, the team led by Rubin developed a mobile device platform powered by the Linux kernel. Google marketed the platform to handset makers and carriers on the promise of providing a flexible, upgradeable system. Google had "lined up a series of hardware components and software partners and signaled to carriers that it was open to various degrees of cooperation". Speculation about Google's intention to enter the mobile communications market continued to build through December 2006. An early prototype had a close resemblance to a BlackBerry phone, with no touchscreen and a physical QWERTY keyboard, but the arrival of 2007's Apple iPhone meant that Android "had to go back to the drawing board". Google later changed its Android specification documents to state that "Touchscreens will be supported", although "the Product was designed with the presence of discrete physical buttons as an assumption, therefore a touchscreen cannot completely replace physical buttons". By 2008, both Nokia and BlackBerry announced touch-based smartphones to rival the iPhone 3G, and Android's focus eventually switched to just touchscreens. The first commercially available smartphone running Android was the HTC Dream, also known as T-Mobile G1, announced on September 23, 2008.
Android (operating system) | https://en.wikipedia.org/wiki?curid=12610483 | 8,145
On November 5, 2007, the Open Handset Alliance, a consortium of technology companies including Google, device manufacturers such as HTC, Motorola and Samsung, wireless carriers such as Sprint and T-Mobile, and chipset makers such as Qualcomm and Texas Instruments, unveiled itself, with a goal to develop "the first truly open and comprehensive platform for mobile devices". Within a year, the Open Handset Alliance faced two other open-source competitors, the Symbian Foundation and the LiMo Foundation, the latter of which, like Google, was also developing a Linux-based mobile operating system. In September 2007, "InformationWeek" covered an Evalueserve study reporting that Google had filed several patent applications in the area of mobile telephony. Since 2008, Android has seen numerous updates which have incrementally improved the operating system, adding new features and fixing bugs in previous releases. Each major release is named in alphabetical order after a dessert or sugary treat, with the first few Android versions being called "Cupcake", "Donut", "Eclair", and "Froyo", in that order. During its announcement of Android KitKat in 2013, Google explained that "Since these devices make our lives so sweet, each Android version is named after a dessert", although a Google spokesperson told CNN in an interview that "It's kind of like an internal team thing, and we prefer to be a little bit—how should I say—a bit inscrutable in the matter, I'll say". In 2010, Google launched its Nexus series of devices, a lineup in which Google partnered with different device manufacturers to produce new devices and introduce new Android versions. The series was described as having "played a pivotal role in Android's history by introducing new software iterations and hardware standards across the board", and became known for its "bloat-free" software with "timely ... updates". At its developer conference in May 2013, Google announced a special version of the Samsung Galaxy S4, where, instead of using Samsung's own Android customization, the phone ran "stock Android" and was promised to receive new system updates fast. The device would become the start of the Google Play edition program, and was followed by other devices, including the HTC One Google Play edition, and Moto G Google Play edition. In 2015, "Ars Technica" wrote that "Earlier this week, the last of the Google Play edition Android phones in Google's online storefront were listed as 'no longer available for sale'", and that "Now they're all gone, and it looks a whole lot like the program has wrapped up".
Android (operating system) | https://en.wikipedia.org/wiki?curid=12610483 | 8,148
From 2008 to 2013, Hugo Barra served as product spokesperson, representing Android at press conferences and Google I/O, Google's annual developer-focused conference. He left Google in August 2013 to join Chinese phone maker Xiaomi. Less than six months earlier, Google's then-CEO Larry Page announced in a blog post that Andy Rubin had moved from the Android division to take on new projects at Google, and that Sundar Pichai would become the new Android lead. Pichai himself would eventually switch positions, becoming the new CEO of Google in August 2015 following the company's restructure into the Alphabet conglomerate, making Hiroshi Lockheimer the new head of Android. In Android 4.4 "KitKat", shared write access to MicroSD memory cards was locked for user-installed applications; only the dedicated directories with respective package names, located inside Android/data, remained writable. Write access was reinstated in Android 5 "Lollipop" through the backwards-incompatible Storage Access Framework interface. In June 2014, Google announced Android One, a set of "hardware reference models" that would "allow [device makers] to easily create high-quality phones at low costs", designed for consumers in developing countries. In September, Google announced the first set of Android One phones for release in India. However, "Recode" reported in June 2015 that the project was "a disappointment", citing "reluctant consumers and manufacturing partners" and "misfires from the search company that has never quite cracked hardware". Plans to relaunch Android One surfaced in August 2015, with Africa announced as the next location for the program a week later. A report from "The Information" in January 2017 stated that Google was expanding its low-cost Android One program into the United States, although "The Verge" noted that the company would presumably not produce the actual devices itself. Google introduced the Pixel and Pixel XL smartphones in October 2016, marketed as being the first phones made by Google, and they exclusively featured certain software features, such as the Google Assistant, before wider rollout. The Pixel phones replaced the Nexus series, with a new generation of Pixel phones launched in October 2017. In May 2019, the operating system became entangled in the trade war between China and the United States involving Huawei, which, like many other tech firms, had become dependent on access to the Android platform. In the summer of 2019, Huawei announced it would create an alternative operating system to Android known as Harmony OS, and has filed for intellectual property rights across major global markets. Under these sanctions, Huawei has long-term plans to replace Android in 2022 with the new operating system, as Harmony OS was originally designed for internet of things devices rather than for smartphones and tablets.
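The package-named directory restriction described above maps onto the platform's app-specific external storage API, which resolves the directory without hard-coding paths; a minimal Kotlin sketch (the file name is hypothetical, and no storage permission is needed for an app's own directory):

```kotlin
import android.content.Context
import java.io.File

// Under the KitKat policy, an app may freely write only to its own
// package-named directory on external storage (e.g. Android/data/<package>/files).
// getExternalFilesDir() resolves that directory for the calling app.
fun writeToAppExternalDir(context: Context) {
    val dir: File? = context.getExternalFilesDir(null) // null = root of the app's files dir
    if (dir != null) {
        // "notes.txt" is a hypothetical file name used for illustration.
        File(dir, "notes.txt").writeText("saved inside the app's own external directory")
    }
}
```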
Android (operating system) | https://en.wikipedia.org/wiki?curid=12610483 | 8,152
On August 22, 2019, it was announced that Android "Q" would officially be branded as Android 10, ending the historic practice of naming major versions after desserts. Google stated that these names were not "inclusive" to international users (due either to the aforementioned foods not being internationally known, or being difficult to pronounce in some languages). On the same day, "Android Police" reported that Google had commissioned a statue of a giant number "10" to be installed in the lobby of the developers' new office. Android 10 was released on September 3, 2019, to Google Pixel phones first. In late 2021, some users reported that they were unable to dial emergency services. The problem was caused by a combination of bugs in Android and in the Microsoft Teams app; both companies released updates addressing the issue. Android's default user interface is mainly based on direct manipulation, using touch inputs that loosely correspond to real-world actions, like swiping, tapping, pinching, and reverse pinching to manipulate on-screen objects, along with a virtual keyboard. Game controllers and full-size physical keyboards are supported via Bluetooth or USB. The response to user input is designed to be immediate and provides a fluid touch interface, often using the vibration capabilities of the device to provide haptic feedback to the user. Internal hardware, such as accelerometers, gyroscopes and proximity sensors, is used by some applications to respond to additional user actions, for example adjusting the screen from portrait to landscape depending on how the device is oriented, or allowing the user to steer a vehicle in a racing game by rotating the device, simulating control of a steering wheel. Android devices boot to the home screen, the primary navigation and information "hub" on Android devices, analogous to the desktop found on personal computers. Android home screens are typically made up of app icons and widgets; app icons launch the associated app, whereas widgets display live, auto-updating content, such as a weather forecast, the user's email inbox, or a news ticker, directly on the home screen. A home screen may be made up of several pages, between which the user can swipe back and forth. Third-party apps available on Google Play and other app stores can extensively re-theme the home screen, and even mimic the look of other operating systems, such as Windows Phone. Most manufacturers customize the look and features of their Android devices to differentiate themselves from their competitors. Along the top of the screen is a status bar, showing information about the device and its connectivity. The status bar can be pulled (swiped) down to reveal a notification screen where apps display important information or updates, as well as quick access to system controls and toggles such as display brightness, connectivity settings (Wi-Fi, Bluetooth, cellular data), audio mode, and flashlight. Vendors may implement extended settings, such as the ability to adjust the flashlight brightness.
Android (operating system) | https://en.wikipedia.org/wiki?curid=12610483 | 8,157
Notifications are "short, timely, and relevant information about your app when it's not in use", and when tapped, users are directed to a screen inside the app relating to the notification. Beginning with Android 4.1 "Jelly Bean", "expandable notifications" allow the user to tap an icon on the notification in order for it to expand and display more information and possible app actions right from the notification. An "All Apps" screen lists all installed applications, with the ability for users to drag an app from the list onto the home screen. The app list may be accessed using a gesture or a button, depending on the Android version. A "Recents" screen, also known as "Overview", lets users switch between recently used apps. The recent list may appear side-by-side or overlapping, depending on the Android version and manufacturer. Many early Android OS smartphones were equipped with a dedicated search button for quick access to a web search engine and individual apps' internal search feature. More recent devices typically allow the former through a long press or swipe away from the home button. The dedicated option key, also known as menu key, and its on-screen simulation, is no longer supported since Android version 10. Google recommends mobile application developers to locate menus within the user interface. On more recent phones, its place is occupied by a task key used to access the list of recently used apps when actuated. Depending on device, its long press may simulate a menu button press or engage split screen view, the latter of which is the default behaviour since stock Android version 7. The earliest vendor-customized Android-based smartphones known to have featured a split-screen view mode are the 2012 Samsung Galaxy S3 and Note 2, the former of which received this feature with the "premium suite" upgrade delivered in TouchWiz with Android 4.1 Jelly Bean. When connecting or disconnecting charging power and when shortly actuating the power button or home button, all while the device is powered off, a visual battery meter whose appearance varies among vendors appears on the screen, allowing the user to quickly assess the charge status of a powered-off without having to boot it up first. Some display the battery percentage. Many, to almost all, Android devices come with preinstalled Google apps including Gmail, Google Maps, Google Chrome, YouTube, Google Play Music, Google Play Movies & TV, and many more. Applications ("apps"), which extend the functionality of devices (and must be 64-bit), are written using the Android software development kit (SDK) and, often, Kotlin programming language, which replaced Java as Google's preferred language for Android app development in May 2019, and was originally announced in May 2017. Java is still supported (originally the only option for user-space programs, and is often mixed with Kotlin), as is C++. Java or other JVM languages, such as Kotlin, may be combined with C/C++, together with a choice of non-default runtimes that allow better C++ support. The Go programming language is also supported, although with a limited set of application programming interfaces (API).
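As an illustration of the expandable-notification mechanism introduced in Android 4.1, here is a hedged Kotlin sketch assuming API 26+ (for notification channels) and the androidx.core library; the channel id and strings are placeholders:

```kotlin
import android.app.NotificationChannel
import android.app.NotificationManager
import android.content.Context
import androidx.core.app.NotificationCompat

// Posts a notification whose collapsed row shows one line and whose
// expanded form (BigTextStyle) reveals longer content and app actions.
fun postExpandableNotification(context: Context) {
    val channelId = "demo_channel" // hypothetical channel id
    val manager = context.getSystemService(NotificationManager::class.java)
    manager.createNotificationChannel(
        NotificationChannel(channelId, "Demo", NotificationManager.IMPORTANCE_DEFAULT)
    )
    val notification = NotificationCompat.Builder(context, channelId)
        .setSmallIcon(android.R.drawable.stat_notify_chat)
        .setContentTitle("New message")
        .setContentText("Collapsed one-line summary")
        // The style supplies the extra text shown when the user expands the notification.
        .setStyle(NotificationCompat.BigTextStyle().bigText("Longer text revealed on expansion"))
        .build()
    manager.notify(1, notification)
}
```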
Android (operating system) | https://en.wikipedia.org/wiki?curid=12610483 | 8,166
The SDK includes a comprehensive set of development tools, including a debugger, software libraries, a handset emulator based on QEMU, documentation, sample code, and tutorials. Initially, Google's supported integrated development environment (IDE) was Eclipse using the Android Development Tools (ADT) plugin; in December 2014, Google released Android Studio, based on IntelliJ IDEA, as its primary IDE for Android application development. Other development tools are available, including a native development kit (NDK) for applications or extensions in C or C++, Google App Inventor, a visual environment for novice programmers, and various cross-platform mobile web application frameworks. In January 2014, Google unveiled a framework based on Apache Cordova for porting Chrome HTML5 web applications to Android, wrapped in a native application shell. Additionally, Google acquired Firebase in 2014, which provides helpful tools for app and web developers. Android has a growing selection of third-party applications, which can be acquired by users by downloading and installing the application's APK (Android application package) file, or by downloading them using an application store program that allows users to install, update, and remove applications from their devices. Google Play Store is the primary application store installed on Android devices that comply with Google's compatibility requirements and license the Google Mobile Services software. Google Play Store allows users to browse, download and update applications published by Google and third-party developers; more than three million applications are available for Android in the Play Store, and over 50 billion application installations have been performed. Some carriers offer direct carrier billing for Google Play application purchases, where the cost of the application is added to the user's monthly bill. Gmail, Android, Chrome, Google Play and Maps each have over one billion active users a month. Due to the open nature of Android, a number of third-party application marketplaces also exist for Android, either to provide a substitute for devices that are not allowed to ship with Google Play Store, provide applications that cannot be offered on Google Play Store due to policy violations, or for other reasons. Examples of these third-party stores have included the Amazon Appstore, GetJar, and SlideMe. F-Droid, another alternative marketplace, seeks to only provide applications that are distributed under free and open source licenses. In October 2020, Google removed several Android applications from Play Store, as they were identified breaching its data collection rules. The firm was informed by the International Digital Accountability Council (IDAC) that apps for children like "Number Coloring", "Princess Salon" and "Cats & Cosplay", with collective downloads of 20 million, were violating Google's policies.
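For apps distributed through the Play Store, a common pattern is linking to a store listing via the market: URI scheme; a small Kotlin sketch under that assumption (the package name is a placeholder):

```kotlin
import android.content.ActivityNotFoundException
import android.content.Context
import android.content.Intent
import android.net.Uri

// Opens an app's Google Play listing. The market:// scheme is handled by the
// Play Store app when it is installed; otherwise fall back to the web storefront.
fun openPlayListing(context: Context, packageName: String = "com.example.app") {
    try {
        context.startActivity(
            Intent(Intent.ACTION_VIEW, Uri.parse("market://details?id=$packageName"))
        )
    } catch (e: ActivityNotFoundException) {
        // Devices without the Play Store (e.g. some forks) reach the web listing instead.
        context.startActivity(
            Intent(Intent.ACTION_VIEW,
                Uri.parse("https://play.google.com/store/apps/details?id=$packageName"))
        )
    }
}
```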
Android (operating system) | https://en.wikipedia.org/wiki?curid=12610483 | 8,170
At the Windows 11 announcement event in June 2021, Microsoft showcased the new Windows Subsystem for Android (WSA), based on the Android Open Source Project (AOSP), which will allow users to run Android apps on their Windows desktop. The storage of Android devices can be expanded using secondary devices such as SD cards. Android recognizes two types of secondary storage: "portable" storage (which is used by default), and "adoptable" storage. Portable storage is treated as an external storage device. Adoptable storage, introduced in Android 6.0, allows the internal storage of the device to be spanned with the SD card, treating it as an extension of the internal storage. This has the disadvantage of preventing the memory card from being used with another device unless it is reformatted. Android 4.4 introduced the Storage Access Framework (SAF), a set of APIs for accessing files on the device's filesystem. Since Android 11, Android has required apps to conform to a data privacy policy known as "scoped storage", under which apps may only automatically have access to certain directories (such as those for pictures, music, and video), and app-specific directories they have created themselves. Apps are required to use the SAF to access any other part of the filesystem. Since Android devices are usually battery-powered, Android is designed to manage processes to keep power consumption at a minimum. When an application is not in use, the system suspends its operation so that, while available for immediate use rather than closed, it does not use battery power or CPU resources. Android manages the applications stored in memory automatically: when memory is low, the system will begin invisibly and automatically closing inactive processes, starting with those that have been inactive for the longest amount of time. Lifehacker reported in 2011 that third-party task-killer applications were doing more harm than good. Some settings for use by developers for debugging and power users are located in a "Developer options" sub-menu, such as the ability to highlight updating parts of the display, show an overlay with the current status of the touch screen, show touching spots for possible use in screencasting, notify the user of unresponsive background processes with the option to end them ("Show all ANRs", i.e. "Application Not Responding"), prevent a Bluetooth audio client from controlling the system volume ("Disable absolute volume"), and adjust the duration of transition animations or deactivate them completely to speed up navigation.
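A minimal Kotlin sketch of the SAF flow that scoped storage mandates for files outside an app's own directories; the request code is arbitrary and illustrative:

```kotlin
import android.app.Activity
import android.content.Intent

const val PICK_DOCUMENT_REQUEST = 42 // arbitrary request code for illustration

// Under scoped storage, access outside an app's own directories goes through
// the Storage Access Framework: the system file picker returns a URI that the
// user has explicitly granted to the app.
fun Activity.pickTextDocument() {
    val intent = Intent(Intent.ACTION_OPEN_DOCUMENT).apply {
        addCategory(Intent.CATEGORY_OPENABLE) // only documents that can be opened as a stream
        type = "text/plain"                   // restrict the picker to plain-text files
    }
    // The granted document URI arrives in onActivityResult().
    startActivityForResult(intent, PICK_DOCUMENT_REQUEST)
}
```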
Android (operating system) | https://en.wikipedia.org/wiki?curid=12610483 | 8,175
Developer options have been hidden by default since Android 4.2 "Jelly Bean", but can be enabled by tapping the operating system's build number in the device information section seven times. Hiding developer options again requires deleting user data for the "Settings" app, possibly resetting some other preferences. The main hardware platform for Android is ARM (the ARMv7 and ARMv8-A architectures), with x86 and x86-64 architectures also officially supported in later versions of Android. The unofficial Android-x86 project provided support for x86 architectures ahead of the official support. Since 2012, Android devices with Intel processors have appeared, including phones and tablets. While gaining support for 64-bit platforms, Android was first made to run on 64-bit x86 and then on ARM64. An unofficial experimental port of the operating system to the RISC-V architecture was released in 2021. Minimum RAM requirements for devices running Android 7.1 range from 2 GB for the best hardware down to 1 GB for the most common screens. Android supports multiple versions of OpenGL ES, as well as Vulkan (with Vulkan 1.1 available on some devices). Android devices incorporate many optional hardware components, including still or video cameras, GPS, orientation sensors, dedicated gaming controls, accelerometers, gyroscopes, barometers, magnetometers, proximity sensors, pressure sensors, thermometers, and touchscreens. Some hardware components are not required, but became standard in certain classes of devices, such as smartphones, and additional requirements apply if they are present. Some other hardware was initially required, but those requirements have been relaxed or eliminated altogether. For example, as Android was developed initially as a phone OS, hardware such as microphones was required, while over time the phone function became optional. Android used to require an autofocus camera; this was relaxed to a fixed-focus camera, and the camera requirement was dropped entirely when Android started to be used on set-top boxes. In addition to running on smartphones and tablets, several vendors run Android natively on regular PC hardware with a keyboard and mouse. In addition to their availability on commercially available hardware, similar PC hardware-friendly versions of Android are freely available from the Android-x86 project, including customized Android 4.4. Using the Android emulator that is part of the Android SDK, or third-party emulators, Android can also run non-natively on x86 architectures. Chinese companies are building a PC and mobile operating system, based on Android, to "compete directly with Microsoft Windows and Google Android". The Chinese Academy of Engineering noted that "more than a dozen" companies were customizing Android following a Chinese ban on the use of Windows 8 on government PCs.
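Because so many hardware components are optional, apps are expected to probe for features at runtime rather than assume their presence; a short Kotlin sketch using the standard PackageManager feature flags:

```kotlin
import android.content.Context
import android.content.pm.PackageManager

// Queries a few of the optional-hardware feature flags defined by the platform.
fun describeHardware(context: Context): String {
    val pm = context.packageManager
    val hasCamera = pm.hasSystemFeature(PackageManager.FEATURE_CAMERA_ANY)
    val hasGyro = pm.hasSystemFeature(PackageManager.FEATURE_SENSOR_GYROSCOPE)
    val hasGps = pm.hasSystemFeature(PackageManager.FEATURE_LOCATION_GPS)
    return "camera=$hasCamera gyroscope=$hasGyro gps=$hasGps"
}
```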
Android (operating system) | https://en.wikipedia.org/wiki?curid=12610483 | 8,180
Android is developed by Google until the latest changes and updates are ready to be released, at which point the source code is made available to the Android Open Source Project (AOSP), an open source initiative led by Google. The first source code release happened as part of the initial release in 2007. All releases are under the Apache License. The AOSP code can be found with minimal modifications on select devices, mainly the former Nexus and current Android One series of devices. However, most original equipment manufacturers (OEMs) customize the source code to run on their hardware. Android's source code does not contain the device drivers, often proprietary, that are needed for certain hardware components, and does not contain the source code of Google Play Services, which many apps depend on. As a result, most Android devices, including Google's own, ship with a combination of free and open source and proprietary software, with the software required for accessing Google services falling into the latter category. In response to this, there are some projects that build complete operating systems based on AOSP as free software, the first being CyanogenMod (see section Open-source community below). Google provides annual Android releases, both for factory installation in new devices, and for over-the-air updates to existing devices. The latest major release is Android 13. The extensive variation of hardware in Android devices has caused significant delays for software upgrades and security patches. Each upgrade has had to be specifically tailored, a time- and resource-consuming process. Except for devices within the Google Nexus and Pixel brands, updates have often arrived months after the release of the new version, or not at all. Manufacturers often prioritize their newest devices and leave old ones behind. Additional delays can be introduced by wireless carriers who, after receiving updates from manufacturers, further customize Android to their needs and conduct extensive testing on their networks before sending out the upgrade. There are also situations in which upgrades are impossible due to a manufacturer not updating necessary drivers. The lack of after-sale support from manufacturers and carriers has been widely criticized by consumer groups and the technology media. Some commentators have noted that the industry has a financial incentive not to upgrade their devices, as the lack of updates for existing devices fuels the purchase of newer ones, an attitude described as "insulting". "The Guardian" complained that the method of distribution for updates is complicated only because manufacturers and carriers have designed it that way. In 2011, Google partnered with a number of industry players to announce an "Android Update Alliance", pledging to deliver timely updates for every device for 18 months after its release; however, there has not been another official word about that alliance since its announcement.
Android (operating system) | https://en.wikipedia.org/wiki?curid=12610483 | 8,186
In 2012, Google began de-coupling certain aspects of the operating system (particularly its central applications) so they could be updated through the Google Play store independently of the OS. One of those components, Google Play Services, is a closed-source system-level process providing APIs for Google services, installed automatically on nearly all devices running Android 2.2 "Froyo" and higher. With these changes, Google can add new system functions and update apps without having to distribute an upgrade to the operating system itself. As a result, Android 4.2 and 4.3 "Jelly Bean" contained relatively few user-facing changes, focusing more on minor changes and platform improvements. HTC's then-executive Jason Mackenzie called monthly security updates "unrealistic" in 2015, and Google was trying to persuade carriers to exclude security patches from the full testing procedures. In May 2016, Bloomberg Businessweek reported that Google was making efforts to keep Android more up-to-date, including accelerated rates of security updates, rolling out technological workarounds, reducing requirements for phone testing, and ranking phone makers in an attempt to "shame" them into better behavior. As stated by "Bloomberg": "As smartphones get more capable, complex and hackable, having the latest software work closely with the hardware is increasingly important". Hiroshi Lockheimer, the Android lead, admitted that "It's not an ideal situation", further commenting that the lack of updates is "the weakest link on security on Android". Wireless carriers were described in the report as the "most challenging discussions", due to their slow approval time while testing on their networks, despite some carriers, including Verizon Wireless and Sprint Corporation, already shortening their approval times. In a further effort at persuasion, Google shared a list of top phone makers measured by updated devices with its Android partners, and is considering making the list public. Mike Chan, co-founder of phone maker Nextbit and former Android developer, said that "The best way to solve this problem is a massive re-architecture of the operating system", "or Google could invest in training manufacturers and carriers 'to be good Android citizens'". In May 2017, with the announcement of Android 8.0, Google introduced Project Treble, a major re-architecture of the Android OS framework designed to make it easier, faster, and less costly for manufacturers to update devices to newer versions of Android. Project Treble separates the vendor implementation (device-specific, lower-level software written by silicon manufacturers) from the Android OS framework via a new "vendor interface". In Android 7.0 and earlier, no formal vendor interface existed, so device makers had to update large portions of the Android code to move a device to a newer version of the operating system. With Treble, the new stable vendor interface provides access to the hardware-specific parts of Android, enabling device makers to deliver new Android releases simply by updating the Android OS framework, "without any additional work required from the silicon manufacturers."
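Apps that depend on the separately updated Google Play Services component typically verify its presence at runtime rather than assume a particular OS version; a minimal Kotlin sketch assuming the play-services-base dependency:

```kotlin
import android.content.Context
import com.google.android.gms.common.ConnectionResult
import com.google.android.gms.common.GoogleApiAvailability

// Because Google Play Services is a closed-source, independently updated
// component, it may be missing, disabled, or outdated on a given device.
fun isPlayServicesAvailable(context: Context): Boolean {
    val status = GoogleApiAvailability.getInstance()
        .isGooglePlayServicesAvailable(context)
    return status == ConnectionResult.SUCCESS
}
```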
Android (operating system) | https://en.wikipedia.org/wiki?curid=12610483 | 8,189
In September 2017, Google's Project Treble team revealed that, as part of their efforts to improve the security lifecycle of Android devices, Google had managed to get the Linux Foundation to agree to extend the support lifecycle of the Linux Long-Term Support (LTS) kernel branch from the 2 years that it had historically lasted to 6 years for future versions of the LTS kernel, starting with Linux kernel 4.4. In May 2019, with the announcement of Android 10, Google introduced Project Mainline to simplify and expedite delivery of updates to the Android ecosystem. Project Mainline enables updates to core OS components through the Google Play Store. As a result, important security and performance improvements that previously needed to be part of full OS updates can be downloaded and installed as easily as an app update. Google reported rolling out new amendments in Android 12 aimed at making the use of third-party application stores easier. This announcement addressed the concerns reported regarding the development of Android apps, including a fight over an alternative in-app payment system and difficulties faced by businesses moving online because of COVID-19. Android's kernel is based on the Linux kernel's long-term support (LTS) branches. Android currently uses version 4.14, 4.19 or 5.4 of the Linux kernel; the actual kernel depends on the individual device. Android's variant of the Linux kernel has further architectural changes that are implemented by Google outside the typical Linux kernel development cycle, such as the inclusion of components like device trees, ashmem, ION, and different out-of-memory (OOM) handling. Certain features that Google contributed back to the Linux kernel, notably a power management feature called "wakelocks", were initially rejected by mainline kernel developers, partly because they felt that Google did not show any intent to maintain its own code. Google announced in April 2010 that it would hire two employees to work with the Linux kernel community, but Greg Kroah-Hartman, the current Linux kernel maintainer for the stable branch, said in December 2010 that he was concerned that Google was no longer trying to get its code changes included in mainstream Linux. Google engineer Patrick Brady once stated at the company's developer conference that "Android is not Linux", with "Computerworld" adding that "Let me make it simple for you, without Linux, there is no Android". "Ars Technica" wrote that "Although Android is built on top of the Linux kernel, the platform has very little in common with the conventional desktop Linux stack".
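A brief Kotlin sketch of the wakelock mechanism as exposed to applications (the tag is illustrative; the WAKE_LOCK permission must be declared in the manifest):

```kotlin
import android.content.Context
import android.os.PowerManager

// A partial wakelock keeps the CPU running for a bounded task even if the
// screen turns off. Holding locks longer than necessary drains the battery.
fun runWhileAwake(context: Context, task: () -> Unit) {
    val pm = context.getSystemService(Context.POWER_SERVICE) as PowerManager
    val lock = pm.newWakeLock(PowerManager.PARTIAL_WAKE_LOCK, "demo:task")
    lock.acquire(60_000L) // auto-release after 60 s as a safety net
    try {
        task()
    } finally {
        if (lock.isHeld) lock.release()
    }
}
```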
Android (operating system) | https://en.wikipedia.org/wiki?curid=12610483 | 8,194
In August 2011, Linus Torvalds said that "eventually Android and Linux would come back to a common kernel, but it will probably not be for four to five years". In December 2011, Greg Kroah-Hartman announced the start of the Android Mainlining Project, which aims to put some Android drivers, patches and features back into the Linux kernel, starting in Linux 3.3. Linux included the autosleep and wakelocks capabilities in the 3.5 kernel, after many previous attempts at a merger. The interfaces are the same but the upstream Linux implementation allows for two different suspend modes: to memory (the traditional suspend that Android uses), and to disk (hibernate, as it is known on the desktop). Google maintains a public code repository that contains their experimental work to re-base Android off the latest stable Linux versions. Android is a Linux distribution according to the Linux Foundation, Google's open-source chief Chris DiBona, and several journalists. Others, such as Google engineer Patrick Brady, say that Android is not Linux in the traditional Unix-like Linux distribution sense; Android does not include the GNU C Library (it uses Bionic as an alternative C library) and some other components typically found in Linux distributions. With the release of Android Oreo in 2017, Google began to require that devices shipped with new SoCs had Linux kernel version 4.4 or newer, for security reasons. Existing devices upgraded to Oreo, and new products launched with older SoCs, were exempt from this rule. The flash storage on Android devices is split into several partitions, such as /system for the operating system itself, and /data for user data and application installations. In contrast to typical desktop Linux distributions, Android device owners are not given root access to the operating system, and sensitive partitions such as /system are read-only. However, root access can be obtained by exploiting security flaws in Android, which is used frequently by the open-source community to enhance the capabilities and customizability of their devices, but also by malicious parties to install viruses and malware. Root access can also be obtained by unlocking the bootloader via the "OEM unlocking" option on certain devices, including most Google Pixel and OnePlus models. The unlocking process resets the system to factory state, erasing all user data. On top of the Linux kernel, there are the middleware, libraries and APIs written in C, and application software running on an application framework which includes Java-compatible libraries. Development of the Linux kernel continues independently of Android's other source code projects.
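As a rough, hedged illustration of the partition split, an app can inspect its own view of the mount table via /proc/mounts; readability and output format vary by device and Android version, so this is a diagnostic sketch only:

```kotlin
import java.io.File

// Filters the process's mount table for the /system and /data mount points.
// Each line of /proc/mounts is "device mountpoint fstype options ...".
fun systemAndDataMounts(): List<String> =
    File("/proc/mounts").readLines()
        .filter { line ->
            val mountPoint = line.split(" ").getOrNull(1)
            mountPoint == "/system" || mountPoint == "/data"
        }
```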
Android (operating system) | https://en.wikipedia.org/wiki?curid=12610483 | 8,200
Android uses Android Runtime (ART) as its runtime environment (introduced in version 4.4), which uses ahead-of-time (AOT) compilation to entirely compile the application bytecode into machine code upon the installation of an application. In Android 4.4, ART was an experimental feature and not enabled by default; it became the only runtime option in the next major version of Android, 5.0. Until version 5.0, when ART took over, Android used Dalvik as a process virtual machine with trace-based just-in-time (JIT) compilation to run Dalvik "dex-code" (Dalvik Executable), which is usually translated from Java bytecode. Following the trace-based JIT principle, in addition to interpreting the majority of application code, Dalvik performs the compilation and native execution of select frequently executed code segments ("traces") each time an application is launched. For its Java library, the Android platform uses a subset of the now-discontinued Apache Harmony project. In December 2015, Google announced that the next version of Android would switch to a Java implementation based on the OpenJDK project. Android's standard C library, Bionic, was developed by Google specifically for Android, as a derivation of BSD's standard C library code. Bionic itself has been designed with several major features specific to the Linux kernel. The main benefits of using Bionic instead of the GNU C Library (glibc) or uClibc are its smaller runtime footprint and optimization for low-frequency CPUs. At the same time, Bionic is licensed under the terms of the BSD license, which Google finds more suitable for Android's overall licensing model. Aiming for a different licensing model, toward the end of 2012, Google switched the Bluetooth stack in Android from the GPL-licensed BlueZ to the Apache-licensed BlueDroid. A new Bluetooth stack, called Gabeldorsche, was developed to try to fix the bugs in the BlueDroid implementation. Android does not have a native X Window System by default, nor does it support the full set of standard GNU libraries. This made it difficult to port existing Linux applications or libraries to Android, until version r5 of the Android Native Development Kit brought support for applications written completely in C or C++. Libraries written in C may also be used in applications by injection of a small shim and usage of the JNI.
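The JNI shim mentioned above looks roughly like this on the Kotlin side; the library and function names are hypothetical, and the matching native symbol would be built with the NDK:

```kotlin
// Kotlin side of a minimal JNI binding: the 'external' declaration is
// resolved against a native library loaded at class-initialization time.
class NativeBridge {
    // Implemented in C/C++ and compiled with the NDK; name is hypothetical.
    external fun stringFromNative(): String

    companion object {
        init {
            System.loadLibrary("nativedemo") // loads libnativedemo.so
        }
    }
}
```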
Android (operating system) | https://en.wikipedia.org/wiki?curid=12610483 | 8,205
In current versions of Android, "Toybox", a collection of command-line utilities (mostly for use by apps, as Android does not provide a command-line interface by default), has been used since the release of Marshmallow, replacing a similar "Toolbox" collection found in previous Android versions. Android has another operating system, Trusty OS, within it, as a part of "Trusty", "software components supporting a Trusted Execution Environment (TEE) on mobile devices." "Trusty and the Trusty API are subject to change. [..] Applications for the Trusty OS can be written in C/C++ (C++ support is limited), and they have access to a small C library. [..] All Trusty applications are single-threaded; multithreading in Trusty userspace currently is unsupported. [..] Third-party application development is not supported in" the current version, and the software running on the OS and its processor runs the "DRM framework for protected content. [..] There are many other uses for a TEE such as mobile payments, secure banking, full-disk encryption, multi-factor authentication, device reset protection, replay-protected persistent storage, wireless display ("cast") of protected content, secure PIN and fingerprint processing, and even malware detection." Android's source code is released by Google under an open source license, and its open nature has encouraged a large community of developers and enthusiasts to use the open-source code as a foundation for community-driven projects, which deliver updates to older devices, add new features for advanced users or bring Android to devices originally shipped with other operating systems. These community-developed releases often bring new features and updates to devices faster than through the official manufacturer/carrier channels, with a comparable level of quality; provide continued support for older devices that no longer receive official updates; or bring Android to devices that were officially released running other operating systems, such as the HP TouchPad. Community releases often come pre-rooted and contain modifications not provided by the original vendor, such as the ability to overclock or over/undervolt the device's processor. CyanogenMod was the most widely used community firmware, now discontinued and succeeded by LineageOS. As of August 2019, there are a handful of notable custom Android distributions (ROMs) of the latest Android version 9.0 Pie, which was released publicly in August 2018. See "List of custom Android distributions".
Android (operating system) | https://en.wikipedia.org/wiki?curid=12610483 | 8,209
Historically, device manufacturers and mobile carriers have typically been unsupportive of third-party firmware development. Manufacturers express concern about improper functioning of devices running unofficial software and the support costs resulting from this. Moreover, modified firmware such as CyanogenMod sometimes offer features, such as tethering, for which carriers would otherwise charge a premium. As a result, technical obstacles including locked bootloaders and restricted access to root permissions are common in many devices. However, as community-developed software has grown more popular, and following a statement by the Librarian of Congress in the United States that permits the "jailbreaking" of mobile devices, manufacturers and carriers have softened their position regarding third party development, with some, including HTC, Motorola, Samsung and Sony, providing support and encouraging development. As a result of this, over time the need to circumvent hardware restrictions to install unofficial firmware has lessened as an increasing number of devices are shipped with unlocked or unlockable bootloaders, similar to the Nexus series of phones, although usually requiring that users waive their devices' warranties to do so. However, despite manufacturer acceptance, some carriers in the US still require that phones are locked down, frustrating developers and customers. Internally, Android identifies each supported device by its device codename, a short string which may or may not be similar to the model name used in marketing the device. For example, the device codename of the Pixel smartphone is "sailfish". The device codename is usually not visible to the end user, but is important for determining compatibility with modified Android versions. It is sometimes also mentioned in articles discussing a device, because it makes it possible to distinguish different hardware variants of a device, even if the manufacturer offers them under the same name. The device codename is available to running applications under android.os.Build.DEVICE. In 2020, Google launched the Android Partner Vulnerability Initiative to improve the security of Android. They also formed an Android security team. Research from security company Trend Micro lists premium service abuse as the most common type of Android malware, where text messages are sent from infected phones to premium-rate telephone numbers without the consent or even knowledge of the user. Other malware displays unwanted and intrusive advertisements on the device, or sends personal information to unauthorised third parties. Security threats on Android are reportedly growing exponentially; however, Google engineers have argued that the malware and virus threat on Android is being exaggerated by security companies for commercial reasons, and have accused the security industry of playing on fears to sell virus protection software to users. Google maintains that dangerous malware is actually extremely rare, and a survey conducted by F-Secure showed that only 0.5% of Android malware reported had come from the Google Play store.
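The codename, along with related identifiers, is readable by any app; a one-line Kotlin illustration:

```kotlin
import android.os.Build

// Build exposes the device codename (e.g. "sailfish" for the original Pixel)
// alongside the marketing model name and the manufacturer.
fun deviceIdentity(): String =
    "codename=${Build.DEVICE} model=${Build.MODEL} manufacturer=${Build.MANUFACTURER}"
```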
Android (operating system) | https://en.wikipedia.org/wiki?curid=12610483 | 8,214
In 2021, journalists and researchers reported the discovery of spyware called Pegasus, developed and distributed by a private company, which can be and has been used to infect both iOS and Android smartphones, often without the need for any user interaction or significant clues to the user (partly via use of 0-day exploits), and which can then be used to exfiltrate data, track user locations, capture film through the camera, and activate the microphone at any time. Analysis of data traffic from popular smartphones running variants of Android found substantial default data collection and sharing by this pre-installed software, with no opt-out. Neither of these issues is addressed, or can be addressed, by security patches. As part of the broader 2013 mass surveillance disclosures, it was revealed in September 2013 that the American and British intelligence agencies, the National Security Agency (NSA) and Government Communications Headquarters (GCHQ), respectively, have access to the user data on iPhone, BlackBerry, and Android devices. They are reportedly able to read almost all smartphone information, including SMS, location, emails, and notes. In January 2014, further reports revealed the intelligence agencies' capabilities to intercept the personal information transmitted across the Internet by social networks and other popular applications such as "Angry Birds", which collect personal information of their users for advertising and other commercial reasons. GCHQ has, according to "The Guardian", a wiki-style guide of different apps and advertising networks, and the different data that can be siphoned from each. Later that week, the Finnish Angry Birds developer Rovio announced that it was reconsidering its relationships with its advertising platforms in the light of these revelations, and called upon the wider industry to do the same. The documents revealed a further effort by the intelligence agencies to intercept Google Maps searches and queries submitted from Android and other smartphones to collect location information in bulk. The NSA and GCHQ insist their activities comply with all relevant domestic and international laws, although the Guardian stated "the latest disclosures could also add to mounting public concern about how the technology sector collects and uses information, especially for those outside the US, who enjoy fewer privacy protections than Americans." Leaked documents published by WikiLeaks, codenamed Vault 7 and dated from 2013 to 2016, detail the capabilities of the Central Intelligence Agency (CIA) to perform electronic surveillance and cyber warfare, including the ability to compromise the operating systems of most smartphones (including Android). In August 2015, Google announced that devices in the Google Nexus series would begin to receive monthly security patches. Google also wrote that "Nexus devices will continue to receive major updates for at least two years and security patches for the longer of three years from initial availability or 18 months from last sale of the device via the Google Store." The following October, researchers at the University of Cambridge concluded that 87.7% of Android phones in use had known but unpatched security vulnerabilities due to lack of updates and support. Ron Amadeo of "Ars Technica" wrote also in August 2015 that "Android was originally designed, above all else, to be widely adopted. Google was starting from scratch with zero percent market share, so it was happy to give up control and give everyone a seat at the table in exchange for adoption. [...] Now, though, Android has around 75–80 percent of the worldwide smartphone market—making it not just the world's most popular mobile operating system but arguably the most popular operating system, period. As such, security has become a big issue. Android still uses a software update chain-of-command designed back when the Android ecosystem had zero devices to update, and it just doesn't work". Following news of Google's monthly schedule, some manufacturers, including Samsung and LG, promised to issue monthly security updates, but, as noted by Jerry Hildenbrand in "Android Central" in February 2016, "instead we got a few updates on specific versions of a small handful of models. And a bunch of broken promises".
Android (operating system) | https://en.wikipedia.org/wiki?curid=12610483 | 8,219
In a March 2017 post on Google's Security Blog, Android security leads Adrian Ludwig and Mel Miller wrote that "More than 735 million devices from 200+ manufacturers received a platform security update in 2016" and that "Our carrier and hardware partners helped expand deployment of these updates, releasing updates for over half of the top 50 devices worldwide in the last quarter of 2016". They also wrote that "About half of devices in use at the end of 2016 had not received a platform security update in the previous year", stating that their work would continue to focus on streamlining the security updates program for easier deployment by manufacturers. Furthermore, in a comment to "TechCrunch", Ludwig stated that the wait time for security updates had been reduced from "six to nine weeks down to just a few days", with 78% of flagship devices in North America being up-to-date on security at the end of 2016. Patches to bugs found in the core operating system often do not reach users of older and lower-priced devices. However, the open-source nature of Android allows security contractors to take existing devices and adapt them for highly secure uses. For example, Samsung has worked with General Dynamics through their Open Kernel Labs acquisition to rebuild "Jelly Bean" on top of their hardened microvisor for the "Knox" project. Android smartphones have the ability to report the location of Wi-Fi access points, encountered as phone users move around, to build databases containing the physical locations of hundreds of millions of such access points. These databases form electronic maps to locate smartphones, allowing them to run apps like Foursquare, Google Latitude, Facebook Places, and to deliver location-based ads. Third party monitoring software such as TaintDroid, an academic research-funded project, can, in some cases, detect when personal information is being sent from applications to remote servers. In 2018, Norwegian security firm Promon unearthed a serious Android security hole which can be exploited to steal login credentials, access messages, and track location, and which could be found in all versions of Android, including Android 10. The vulnerability exploited a bug in the multitasking system, enabling a malicious app to overlay legitimate apps with fake login screens that users are not aware of when entering security credentials. Users can also be tricked into granting additional permissions to the malicious apps, which later enable them to perform various nefarious activities, including intercepting texts or calls and stealing banking credentials. "Avast Threat Labs" also discovered that many pre-installed apps on several hundred new Android devices contain dangerous malware and adware. Some of the preinstalled malware can commit ad fraud or even take over its host device.
Android (operating system) | https://en.wikipedia.org/wiki?curid=12610483 | 8,223
In 2020, the Which? watchdog reported that more than a billion Android devices released in 2012 or earlier, which was 40% of Android devices worldwide, were at risk of being hacked. This conclusion stemmed from the fact that no security updates were issued for the Android versions below 7.0 in 2019. Which? collaborated with the AV Comparatives anti-virus lab to infect five phone models with malware, and it succeeded in each case. Google refused to comment on the watchdog's speculations. On August 5, 2020, Twitter published a blog post urging its users to update their applications to the latest version regarding a security concern that allowed others to access direct messages. A hacker could easily use the "Android system permissions" to fetch the account credentials in order to do so. The security issue affects only Android 8 (Android Oreo) and Android 9 (Android Pie). Twitter confirmed that updating the app would restrict such practices. Android applications run in a sandbox, an isolated area of the system that does not have access to the rest of the system's resources, unless access permissions are explicitly granted by the user when the application is installed; however, this may not be possible for pre-installed apps. It is not possible, for example, to turn off the microphone access of the pre-installed camera app without disabling the camera completely. This also applies to Android versions 7 and 8. Since February 2012, Google has used its Google Bouncer malware scanner to watch over and scan apps available in the Google Play store. A "Verify Apps" feature was introduced in November 2012, as part of the Android 4.2 "Jelly Bean" operating system version, to scan all apps, both from Google Play and from third-party sources, for malicious behaviour. Originally only doing so during installation, Verify Apps received an update in 2014 to "constantly" scan apps, and in 2017 the feature was made visible to users through a menu in Settings. Before installing an application, the Google Play store displays a list of the requirements an app needs to function. After reviewing these permissions, the user can choose to accept or refuse them, installing the application only if they accept. In Android 6.0 "Marshmallow", the permissions system was changed; apps are no longer automatically granted all of their specified permissions at installation time. An opt-in system is used instead, in which users are prompted to grant or deny individual permissions to an app when they are needed for the first time. Applications remember the grants, which can be revoked by the user at any time. Pre-installed apps, however, are not always part of this approach. In some cases it may not be possible to deny certain permissions to pre-installed apps, nor be possible to disable them. The Google Play Services app cannot be uninstalled, nor disabled. Any force-stop attempt results in the app restarting itself. The new permissions model is used only by applications developed for Marshmallow using its software development kit (SDK), and older apps will continue to use the previous all-or-nothing approach. Permissions can still be revoked for those apps, though this might prevent them from working properly, and a warning is displayed to that effect.
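A hedged Kotlin sketch of the runtime permission flow introduced in Marshmallow, using the androidx.activity result API; the activity and helper names are hypothetical:

```kotlin
import android.Manifest
import android.content.pm.PackageManager
import androidx.activity.result.contract.ActivityResultContracts
import androidx.appcompat.app.AppCompatActivity
import androidx.core.content.ContextCompat

// The app asks for a permission when the capability is first needed; the user
// may grant or deny it, and the decision is remembered until revoked.
class CameraActivity : AppCompatActivity() {
    private val requestCamera =
        registerForActivityResult(ActivityResultContracts.RequestPermission()) { granted ->
            if (granted) startCamera() else showCameraUnavailableMessage()
        }

    fun onTakePhotoClicked() {
        val alreadyGranted = ContextCompat.checkSelfPermission(
            this, Manifest.permission.CAMERA
        ) == PackageManager.PERMISSION_GRANTED
        if (alreadyGranted) startCamera() else requestCamera.launch(Manifest.permission.CAMERA)
    }

    private fun startCamera() { /* hypothetical camera code */ }
    private fun showCameraUnavailableMessage() { /* hypothetical fallback */ }
}
```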
In September 2014, Jason Nova of "Android Authority" reported on a study by the German security company Fraunhofer AISEC of antivirus software and malware threats on Android. Nova wrote that "The Android operating system deals with software packages by sandboxing them; this does not allow applications to list the directory contents of other apps to keep the system safe. By not allowing the antivirus to list the directories of other apps after installation, applications that show no inherent suspicious behavior when downloaded are cleared as safe. If then later on parts of the app are activated that turn out to be malicious, the antivirus will have no way to know since it is inside the app and out of the antivirus' jurisdiction". The study by Fraunhofer AISEC, examining antivirus software from Avast, AVG, Bitdefender, ESET, F-Secure, Kaspersky, Lookout, McAfee (formerly Intel Security), Norton, Sophos, and Trend Micro, revealed that "the tested antivirus apps do not provide protection against customized malware or targeted attacks", and that "the tested antivirus apps were also not able to detect malware which is completely unknown to date but does not make any efforts to hide its malignity". In August 2013, Google announced Android Device Manager (renamed Find My Device in May 2017), a service that allows users to remotely track, locate, and wipe their Android device, with an Android app for the service released in December. In December 2016, Google introduced a Trusted Contacts app, letting users request location-tracking of loved ones during emergencies. In 2020, Trusted Contacts was shut down and the location-sharing feature was rolled into Google Maps. On October 8, 2018, Google announced new Google Play store requirements to combat the over-sharing of potentially sensitive information, including call and text logs. The issue stems from the fact that many apps request permissions to access users' personal information (even if this information is not needed for the app to function) and some users unquestioningly grant these permissions. Alternatively, a permission might be listed in the app manifest as required (as opposed to optional), in which case the app will not install unless the user grants the permission; users can withdraw any permission, even a required one, from any app in the device settings after installation, but few users do this. Google promised to work with developers and create exceptions if their apps require Phone or SMS permissions for "core app functionality". Enforcement of the new policies started on January 6, 2019, 90 days after the policy announcement on October 8, 2018. Furthermore, Google announced a new "target API level requirement" (targetSdkVersion in the manifest) of at least Android 8.0 (API level 26) for all new apps and app updates. The API level requirement might combat the practice of app developers bypassing some permission screens by specifying early Android versions that had a coarser permission model.
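As an illustration of what auditing such permission requests looks like in practice, the following minimal Kotlin sketch lists the permissions an installed package declares in its manifest and whether each is currently granted. The package name is a placeholder; getPackageInfo with GET_PERMISSIONS and the REQUESTED_PERMISSION_GRANTED flag are standard framework APIs.

```kotlin
import android.content.Context
import android.content.pm.PackageInfo
import android.content.pm.PackageManager

// Lists the manifest-declared permissions of an installed app and whether
// each one is currently granted. "com.example.app" is a placeholder.
fun auditPermissions(context: Context, packageName: String = "com.example.app") {
    val info: PackageInfo = context.packageManager
        .getPackageInfo(packageName, PackageManager.GET_PERMISSIONS)
    val requested = info.requestedPermissions ?: return
    val flags = info.requestedPermissionsFlags ?: return
    requested.forEachIndexed { i, permission ->
        val granted = flags[i] and PackageInfo.REQUESTED_PERMISSION_GRANTED != 0
        println("$permission granted=$granted")
    }
}
```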
Dependence on proprietary Google Play Services, and customizations added on top of the operating system by vendors who license Android from Google, is causing privacy concerns. The source code for Android is open source: it is developed in private by Google, with the source code released publicly when a new version of Android is released. Google publishes most of the code (including network and telephony stacks) under the non-copyleft Apache License version 2.0, which allows modification and redistribution. The license does not grant rights to the "Android" trademark, so device manufacturers and wireless carriers have to license it from Google under individual contracts. Associated Linux kernel changes, developed by the Open Handset Alliance, are released under the copyleft GNU General Public License version 2, with the source code publicly available at all times. The only Android release which was not immediately made available as source code was the tablet-only 3.0 "Honeycomb" release. The reason, according to Andy Rubin in an official Android blog post, was that "Honeycomb" was rushed for production of the Motorola Xoom, and they did not want third parties creating a "really bad user experience" by attempting to put onto smartphones a version of Android intended for tablets. Only the base Android operating system (including some applications) is open-source software, whereas most Android devices ship with a substantial amount of proprietary software, such as Google Mobile Services, which includes applications such as Google Play Store, Google Search, and Google Play Services, a software layer that provides APIs for integration with Google-provided services, among others. These applications must be licensed from Google by device makers, and can only be shipped on devices which meet its compatibility guidelines and other requirements. Custom, certified distributions of Android produced by manufacturers (such as Samsung Experience) may also replace certain stock Android apps with their own proprietary variants and add additional software not included in the stock Android operating system. With the advent of the Google Pixel line of devices, Google itself has also made specific Android features timed or permanent exclusives to the Pixel series. There may also be "binary blob" drivers required for certain hardware components in the device. The best known fully open-source Android services are the LineageOS distribution and MicroG, which acts as an open-source replacement for Google Play Services. Richard Stallman and the Free Software Foundation have been critical of Android and have recommended alternatives such as Replicant, because drivers and firmware vital for the proper functioning of Android devices are usually proprietary, and because the Google Play Store application can forcibly install or uninstall applications and, as a result, invite non-free software. In both cases, the use of closed-source software causes the system to become vulnerable to backdoors.
It has been argued that because developers are often required to purchase the Google-branded Android license, this has turned the theoretically open system into a freemium service. Google licenses its Google Mobile Services software, along with the Android trademarks, only to hardware manufacturers for devices that meet Google's compatibility standards specified in the Android Compatibility Program document. Thus, forks of Android that make major changes to the operating system itself do not include any of Google's non-free components, remain incompatible with applications that require them, and must ship with an alternative software marketplace in lieu of Google Play Store. A prominent example of such an Android fork is Amazon's Fire OS, which is used on the Kindle Fire line of tablets and oriented toward Amazon services. The shipment of Android devices without GMS is also common in mainland China, as Google does not do business there. In 2014, Google also began to require that all Android devices which license the Google Mobile Services software display a prominent "Powered by Android" logo on their boot screens. Google has also enforced preferential bundling and placement of Google Mobile Services on devices, including mandated bundling of the entire main suite of Google applications, mandatory placement of shortcuts to Google Search and the Play Store app on or near the main home screen page in its default configuration, and granting a larger share of search revenue to OEMs who agree not to include third-party app stores on their devices. In March 2018, it was reported that Google had begun to block "uncertified" Android devices from using Google Mobile Services software and to display a warning indicating that "the device manufacturer has preloaded Google apps and services without certification from Google". Users of custom ROMs can register their device ID with their Google account to remove this block. Some stock applications and components in AOSP code that were used by earlier versions of Android, such as Search, Music, Calendar, and the location API, were abandoned by Google in favor of non-free replacements distributed through the Play Store (Google Search, Google Play Music, and Google Calendar) and Google Play Services, which are no longer open source. Moreover, open-source variants of some applications also exclude functions that are present in their non-free versions. These measures are likely intended to discourage forks and encourage commercial licensing in line with Google requirements, as the majority of the operating system's core functionality is dependent on proprietary components licensed exclusively by Google, and it would take significant development resources to develop an alternative suite of software and APIs to replicate or replace them. Apps that do not use Google components would also be at a functional disadvantage, as they can only use APIs contained within the OS itself. In turn, third-party apps may have dependencies on Google Play Services.
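As an example of such a dependency, an app that relies on Google Play Services typically probes for it at runtime before enabling GMS-backed features. Below is a minimal Kotlin sketch using the standard GoogleApiAvailability API; the fallback behavior on GMS-free devices (such as Fire OS) is left to the app.

```kotlin
import android.content.Context
import com.google.android.gms.common.ConnectionResult
import com.google.android.gms.common.GoogleApiAvailability

// Checks at runtime whether proprietary Google Play Services is usable,
// as apps depending on it (push messaging, maps, etc.) commonly do
// before falling back or disabling features on GMS-free devices.
fun isGmsAvailable(context: Context): Boolean =
    GoogleApiAvailability.getInstance()
        .isGooglePlayServicesAvailable(context) == ConnectionResult.SUCCESS
```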
Members of the Open Handset Alliance, which include the majority of Android OEMs, are also contractually forbidden from producing Android devices based on forks of the OS; in 2012, Acer Inc. was forced by Google to halt production of a device powered by Alibaba Group's Aliyun OS, with threats of removal from the OHA, as Google deemed the platform to be an incompatible version of Android. Alibaba Group disputed the allegations, arguing that the OS was a distinct platform from Android (primarily using HTML5 apps), but incorporated portions of Android's platform to allow backwards compatibility with third-party Android software. Indeed, the devices did ship with an application store which offered Android apps; however, the majority of them were pirated. Android received a lukewarm reaction when it was unveiled in 2007. Although analysts were impressed with the respected technology companies that had partnered with Google to form the Open Handset Alliance, it was unclear whether mobile phone manufacturers would be willing to replace their existing operating systems with Android. The idea of an open-source, Linux-based development platform sparked interest, but there were additional worries about Android facing strong competition from established players in the smartphone market, such as Nokia and Microsoft, and rival Linux mobile operating systems that were in development. These established players were skeptical: Nokia was quoted as saying "we don't see this as a threat", and a member of Microsoft's Windows Mobile team stated "I don't understand the impact that they are going to have." Since then, Android has grown to become the most widely used smartphone operating system and "one of the fastest mobile experiences available". Reviewers have highlighted the open-source nature of the operating system as one of its defining strengths, allowing companies such as Nokia (Nokia X family), Amazon (Kindle Fire), Barnes & Noble (Nook), Ouya, Baidu and others to fork the software and release hardware running their own customised version of Android. As a result, it has been described by technology website "Ars Technica" as "practically the default operating system for launching new hardware" for companies without their own mobile platforms. This openness and flexibility are also present at the level of the end user: Android allows extensive customisation of devices by their owners, and apps are freely available from non-Google app stores and third-party websites. These have been cited as among the main advantages of Android phones over others. Despite Android's popularity, including an activation rate three times that of iOS, there have been reports that Google has not been able to leverage its other products and web services successfully to turn Android into the money maker that analysts had expected. "The Verge" suggested that Google is losing control of Android due to the extensive customization and proliferation of non-Google apps and services. Amazon's Kindle Fire line uses Fire OS, a heavily modified fork of Android which does not include or support any of Google's proprietary components, and requires that users obtain software from its competing Amazon Appstore instead of the Play Store. In 2014, in an effort to improve the prominence of the Android brand, Google began to require that devices featuring its proprietary components display an Android logo on the boot screen.
Android has suffered from "fragmentation", a situation where the variety of Android devices, in terms of both hardware variations and differences in the software running on them, makes the task of developing applications that work consistently across the ecosystem harder than on rival platforms such as iOS, where hardware and software vary less. For example, according to data from OpenSignal in July 2013, there were 11,868 models of Android devices, numerous screen sizes and eight Android OS versions simultaneously in use, while the large majority of iOS users had upgraded to the latest iteration of that OS. Critics such as "Apple Insider" have asserted that fragmentation via hardware and software pushed Android's growth through large volumes of low-end, budget-priced devices running older versions of Android. They maintain this forces Android developers to write for the "lowest common denominator" to reach as many users as possible, leaving them too little incentive to make use of the latest hardware or software features that are only available on a smaller percentage of devices. However, OpenSignal, which develops both Android and iOS apps, concluded that although fragmentation can make development trickier, Android's wider global reach also increases the potential reward. Android is the most used operating system on phones in virtually all countries, with some countries, such as India, having over 96% market share. On tablets, usage is more even, as iOS is somewhat more popular globally. The research company Canalys estimated in the second quarter of 2009 that Android had a 2.8% share of worldwide smartphone shipments. By May 2010, Android had a 10% worldwide smartphone market share, overtaking Windows Mobile, whilst in the US Android held a 28% share, overtaking iPhone OS. By the fourth quarter of 2010, its worldwide share had grown to 33% of the market, becoming the top-selling smartphone platform and overtaking Symbian. In the US it became the top-selling platform in April 2011, overtaking BlackBerry OS with a 31.2% smartphone share, according to "comScore". By the third quarter of 2011, Gartner estimated that more than half (52.5%) of smartphone sales belonged to Android. By the third quarter of 2012, Android had a 75% share of the global smartphone market according to the research firm IDC. In July 2011, Google said that 550,000 Android devices were being activated every day, up from 400,000 per day in May, and more than 100 million devices had been activated, with 4.4% growth per week. In September 2012, 500 million devices had been activated, with 1.3 million activations per day. In May 2013, at Google I/O, Sundar Pichai announced that 900 million Android devices had been activated.
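In practice, developers cope with this fragmentation by branching on the API level of the device actually running the app. The following is a minimal Kotlin sketch; the gated feature is purely illustrative.

```kotlin
import android.os.Build

// Typical fragmentation handling: describe the running device, and gate
// newer features on the API level, falling back on older releases.
fun describeDevice(): String =
    "${Build.MANUFACTURER} ${Build.MODEL}, " +
        "Android ${Build.VERSION.RELEASE} (API ${Build.VERSION.SDK_INT})"

fun useBestAvailableFeature() {
    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O) {
        // Use an API introduced in Android 8.0 "Oreo" (API 26)...
    } else {
        // ...or fall back to a lowest-common-denominator code path.
    }
}
```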
Android market share varies by location. In July 2012, Android's share of "mobile subscribers aged 13+" was 52% in the United States and reached 90% in China. During the third quarter of 2012, Android's worldwide smartphone shipment market share was 75%, with 750 million devices activated in total. In April 2013, Android had 1.5 million activations per day. 48 billion application ("app") installations had been performed from the Google Play store, and by September 2013, one billion Android devices had been activated. Android devices account for more than half of smartphone sales in most markets, including the US, while "only in Japan was Apple on top" (September–November 2013 numbers). At the end of 2013, over 1.5 billion Android smartphones had been sold in the four years since 2010, making Android the best-selling phone and tablet OS. Three billion Android smartphones were estimated to have been sold by the end of 2014 (including previous years). According to the research company Gartner, Android-based devices outsold all contenders every year since 2012. In 2013, it outsold Windows 2.8:1, or by 573 million. Android has the largest installed base of all operating systems; since 2013, devices running it have also sold more than Windows, iOS and Mac OS X devices combined. According to StatCounter, which tracks only use for browsing the web, Android has been the most popular mobile operating system since August 2013. Android is the most popular operating system for web browsing in India and several other countries (e.g. virtually all of Asia, with Japan and North Korea as exceptions). According to StatCounter, Android is the most used mobile OS in all African countries, and it stated that "mobile usage has already overtaken desktop in several countries including India, South Africa and Saudi Arabia"; virtually all countries in Africa have done so already (except for seven countries, including Egypt). In Ethiopia and Kenya, for example, mobile (including tablet) usage is at 90.46%, with Android alone accounting for 75.81% of all use there. While Android phones in the Western world almost always include Google's proprietary code (such as Google Play) in the otherwise open-source operating system, Google's proprietary code and trademark are increasingly not used in emerging markets; "The growth of AOSP Android devices goes way beyond just China [..] ABI Research claims that 65 million devices shipped globally with open-source Android in the second quarter of [2014], up from 54 million in the first quarter". Depending on the country, the percentage of phones estimated to be based only on AOSP source code, forgoing the Android trademark, includes: Thailand (44%), Philippines (38%), Indonesia (31%), India (21%), Malaysia (24%), Mexico (18%), and Brazil (9%).
According to a January 2015 Gartner report, "Android surpassed a billion shipments of devices in 2014, and will continue to grow at a double-digit pace in 2015, with a 26 percent increase year over year." This made it the first time that any general-purpose operating system had reached more than one billion end users within a year: by reaching close to 1.16 billion end users in 2014, Android shipped over four times more than iOS and OS X combined, and over three times more than Microsoft Windows. Gartner expected the whole mobile phone market to "reach two billion units in 2016", including Android. Describing the statistics, Farhad Manjoo wrote in "The New York Times" that "About one of every two computers sold today is running Android. [It] has become Earth's dominant computing platform." According to Statista's estimate, Android smartphones had an installed base of 1.8 billion units in 2015, which was 76% of the estimated total number of smartphones worldwide. Android has the largest installed base of any mobile operating system and, since 2013, has been the highest-selling operating system overall, with sales in 2012, 2013 and 2014 close to the installed base of all PCs. In the second quarter of 2014, Android's share of the global smartphone shipment market was 84.7%, a new record. This had grown to 87.5% worldwide market share by the third quarter of 2016, leaving main competitor iOS with 12.1% market share. According to an April 2017 StatCounter report, Android overtook Microsoft Windows to become the most popular operating system for total Internet usage, and it has maintained the plurality since then. In September 2015, Google announced that Android had 1.4 billion monthly active users; this figure rose to two billion monthly active users in May 2017. Despite its success on smartphones, Android tablet adoption was initially slow; it later caught up with the iPad in most countries. One of the main causes was a chicken-and-egg situation: consumers were hesitant to buy an Android tablet due to a lack of high-quality tablet applications, while developers were hesitant to spend time and resources developing tablet applications until there was a significant market for them. The content and app "ecosystem" proved more important than hardware specs as the selling point for tablets. Due to the lack of Android tablet-specific applications in 2011, early Android tablets had to make do with existing smartphone applications that were ill-suited to larger screen sizes, whereas the dominance of Apple's iPad was reinforced by the large number of tablet-specific iOS applications.
Despite app support being in its infancy, a considerable number of Android tablets, like the Barnes & Noble Nook (alongside those using other operating systems, such as the HP TouchPad and BlackBerry PlayBook), were rushed out to market in an attempt to capitalize on the success of the iPad. "InfoWorld" has suggested that some Android manufacturers initially treated their first tablets as a "Frankenphone business", a short-term low-investment opportunity achieved by placing a smartphone-optimized Android OS (before Android 3.0 "Honeycomb" for tablets was available) on a device while neglecting the user interface. This approach, such as with the Dell Streak, failed to gain market traction with consumers and damaged the early reputation of Android tablets. Furthermore, several Android tablets such as the Motorola Xoom were priced the same or higher than the iPad, which hurt sales. An exception was the Amazon Kindle Fire, which relied upon lower pricing as well as access to Amazon's ecosystem of applications and content. This began to change in 2012, with the release of the affordable Nexus 7 and a push by Google for developers to write better tablet applications. According to International Data Corporation, shipments of Android-powered tablets surpassed iPads in Q3 2012. As of the end of 2013, over 191.6 million Android tablets had been sold in the three years since 2011. This made Android tablets the most-sold type of tablet in 2013, surpassing iPads in the second quarter of 2013. According to StatCounter's web use statistics, Android tablets represent the majority of tablet devices used in Africa (70%) and South America (65%), while accounting for less than half elsewhere, e.g. Europe (44%), Asia (44%), North America (34%) and Oceania/Australia (18%). There are countries on all continents where Android tablets are the majority, for example, Mexico. In March 2016, Galen Gruman of "InfoWorld" stated that Android devices could be a "real part of your business [..] there's no longer a reason to keep Android at arm's length. It can now be as integral to your mobile portfolio as Apple's iOS devices are". A year earlier, Gruman had stated that Microsoft's own mobile Office apps were "better on iOS and Android" than on Microsoft's own Windows 10 devices. The recently released Android 12 is the most popular Android version on both smartphones and tablets; on smartphones, its share is 30%. Usage of Android 10 and newer, i.e. supported versions, is at 75%; the remaining users no longer receive security updates. Android 12 is most popular in a few countries including the United States, but Android 11 is most used in most countries, including India, while in many others, including China, Android 10 is the most popular version.
On tablets, Android 12 is the most popular version at 19%, with Android 11 a close second; Android 11 overtook Android 9.0 Pie in July 2021, and Pie is now third at 17% (having previously topped out at over 20%). Usage of Android 10 and newer, i.e. supported versions, is at 43% on Android tablets, rising to 60% once Android 9.0 Pie, which was supported until recently, is included. The usage share varies considerably by country: Android 9.0 Pie has the greatest usage share in the United States (and in the UK) at 34%, Android 11 is the most used version in India, Canada, Australia, most European countries, and others around the world, and Android 8.1 Oreo is the most used in China. 66% of devices have Vulkan support (47% on the newer Vulkan 1.1), the successor to OpenGL. At the same time, 91.5% of devices support OpenGL ES 3.0 or higher (the remaining 8.5% use version 2.0), with 73.5% supporting the latest version, OpenGL ES 3.2. In general, paid Android applications can easily be pirated. In a May 2012 interview with Eurogamer, the developers of "Football Manager" stated that the ratio of pirated players to legitimate players was 9:1 for their game "Football Manager Handheld". However, not every developer agreed that piracy rates were an issue; for example, in July 2012 the developers of the game "Wind-up Knight" said that piracy levels of their game were only 12%, and most of the piracy came from China, where people cannot purchase apps from Google Play. In 2010, Google released a tool for validating authorized purchases for use within apps, but developers complained that this was insufficient and trivial to crack. Google responded that the tool, especially its initial release, was intended as a sample framework for developers to modify and build upon depending on their needs, not as a finished piracy solution. Android "Jelly Bean" introduced the ability for paid applications to be encrypted, so that they may work only on the device for which they were purchased. The success of Android has made it a target for patent and copyright litigation between technology companies, with both Android and Android phone manufacturers having been involved in numerous patent lawsuits and other legal challenges. On August 12, 2010, Oracle sued Google over claimed infringement of copyrights and patents related to the Java programming language. Oracle originally sought damages up to $6.1 billion, but this valuation was rejected by a United States federal judge who asked Oracle to revise the estimate. In response, Google submitted multiple lines of defense, counterclaiming that Android did not infringe on Oracle's patents or copyright, that Oracle's patents were invalid, and several other defenses. It said that Android's Java runtime environment is based on Apache Harmony, a clean room implementation of the Java class libraries, and an independently developed virtual machine called Dalvik. In May 2012, the jury in this case found that Google did not infringe on Oracle's patents, and the trial judge ruled that the structure of the Java APIs used by Google was not copyrightable. The parties agreed to zero dollars in statutory damages for a small amount of copied code. On May 9, 2014, the Federal Circuit partially reversed the district court ruling, deciding in Oracle's favor on the copyrightability issue and remanding the issue of fair use to the district court.
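Returning to the Vulkan and OpenGL ES support figures cited earlier in this section, an app can query those same capabilities on a given device. Below is a minimal Kotlin sketch using the standard ActivityManager and PackageManager APIs; the version constant encodes Vulkan 1.1 in the framework's packed-version format.

```kotlin
import android.app.ActivityManager
import android.content.Context
import android.content.pm.PackageManager

// Queries the graphics capabilities discussed above: the device's highest
// supported OpenGL ES version and whether Vulkan 1.1 hardware is present.
fun reportGraphicsSupport(context: Context) {
    val am = context.getSystemService(Context.ACTIVITY_SERVICE) as ActivityManager
    val glEsVersion = am.deviceConfigurationInfo.glEsVersion // e.g. "3.2"

    // Vulkan 1.1 is encoded as (1 shl 22) or (1 shl 12) = 0x401000.
    val hasVulkan11 = context.packageManager.hasSystemFeature(
        PackageManager.FEATURE_VULKAN_HARDWARE_VERSION, 0x401000
    )
    println("OpenGL ES $glEsVersion, Vulkan 1.1 supported: $hasVulkan11")
}
```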
In December 2015, Google announced that the next major release of Android (Android Nougat) would switch to OpenJDK, the official open-source implementation of the Java platform, instead of using the now-discontinued Apache Harmony project as its runtime. Code reflecting this change was also posted to the AOSP source repository. In its announcement, Google claimed this was part of an effort to create a "common code base" between Java on Android and other platforms. Google later admitted in a court filing that this was part of an effort to address the disputes with Oracle, as its use of OpenJDK code is governed under the GNU General Public License (GPL) with a linking exception, and that "any damages claim associated with the new versions expressly licensed by Oracle under OpenJDK would require a separate analysis of damages from earlier releases". In June 2016, a United States federal court ruled in favor of Google, stating that its use of the APIs was fair use. In April 2021, the United States Supreme Court ruled that Google's use of the Java APIs was within the bounds of fair use, reversing the Federal Circuit Appeals Court ruling and remanding the case for further hearing. The majority opinion began with the assumption that the APIs may be copyrightable, and thus proceeded with a review of the factors that contributed to fair use. In 2013, FairSearch, a lobbying organization supported by Microsoft, Oracle and others, filed a complaint regarding Android with the European Commission, alleging that its free-of-charge distribution model constituted anti-competitive predatory pricing. The Free Software Foundation Europe, whose donors include Google, disputed the FairSearch allegations. On April 20, 2016, the EU filed a formal antitrust complaint against Google based upon the FairSearch allegations, arguing that its leverage over Android vendors, including the mandatory bundling of the entire suite of proprietary Google software, hindering the ability for competing search providers to be integrated into Android, and barring vendors from producing devices running forks of Android, constituted anti-competitive practices. In August 2016, Google was fined US$6.75 million by the Russian Federal Antimonopoly Service (FAS) under similar allegations by Yandex. The European Commission issued its decision on July 18, 2018, determining that Google had conducted three operations related to Android that were in violation of antitrust regulations: bundling Google's search and Chrome as part of Android, blocking phone manufacturers from using forked versions of Android, and establishing deals with phone manufacturers and network providers to exclusively bundle the Google search application on handsets (a practice Google ended by 2014). The EU fined Google €4.34 billion (about US$5 billion) and required the company to end this conduct within 90 days. Google filed its appeal of the ruling in October 2018, though it did not ask for any interim measures to delay the onset of the conduct requirements.
On October 16, 2018, Google announced that it would change its distribution model for Google Mobile Services in the EU, since part of its revenue streams for Android, which came through use of Google Search and Chrome, were now prohibited by the EU's ruling. While the core Android system remains free, OEMs in Europe would be required to purchase a paid license for the core suite of Google applications, such as Gmail, Google Maps and the Google Play Store. Google Search would be licensed separately, with an option to include Google Chrome at no additional cost atop Search. European OEMs would be able to bundle third-party alternatives on phones and devices sold to customers, if they so chose, and would no longer be barred from selling any device running incompatible versions of Android in Europe. In addition to lawsuits against Google directly, various proxy wars have been waged against Android indirectly by targeting manufacturers of Android devices, with the effect of discouraging manufacturers from adopting the platform by increasing the costs of bringing an Android device to market. Both Apple and Microsoft have sued several manufacturers for patent infringement, with Apple's ongoing legal action against Samsung being a particularly high-profile case. In January 2012, Microsoft said it had signed patent license agreements with eleven Android device manufacturers, whose products account for "70 percent of all Android smartphones" sold in the US and 55% of the worldwide revenue for Android devices; these include Samsung and HTC. Samsung's patent settlement with Microsoft included an agreement to allocate more resources to developing and marketing phones running Microsoft's Windows Phone operating system. Microsoft has also tied its own Android software to patent licenses, requiring the bundling of Microsoft Office Mobile and Skype applications on Android devices to subsidize the licensing fees, while at the same time helping to promote its software lines. Google has publicly expressed its frustration with the current patent landscape in the United States, accusing Apple, Oracle and Microsoft of trying to take down Android through patent litigation, rather than innovating and competing with better products and services. In August 2011, Google purchased Motorola Mobility for US$12.5 billion, which was viewed in part as a defensive measure to protect Android, since Motorola Mobility held more than 17,000 patents. In December 2011, Google bought over a thousand patents from IBM. Investigations by Turkey's competition authority into the default search engine in Android, started in 2017, led to a US$17.4 million fine in September 2018 and a fine of 0.05 percent of Google's revenue per day in November 2019, when Google did not meet the requirements. In December 2019, Google stopped issuing licenses for new Android phone models sold in Turkey.
Google has developed several variations of Android for specific use cases, including Android Wear, later renamed Wear OS, for wearable devices such as wrist watches; Android TV for televisions; Android Things for smart or Internet-of-things devices; and Android Automotive for cars. Additionally, by providing infrastructure that combines dedicated hardware and dedicated applications running on regular Android, Google has opened up the platform for use in particular usage scenarios, such as the Android Auto app for cars and Daydream, a virtual reality platform. The open and customizable nature of Android allows device makers to use it on other electronics as well, including laptops, netbooks, and desktop computers, cameras, headphones, home automation systems, game consoles, media players, satellites, routers, printers, payment terminals, automated teller machines, and robots. Additionally, Android has been installed and run on a variety of less-technical objects, including calculators, single-board computers, feature phones, electronic dictionaries, alarm clocks, refrigerators, landline telephones, coffee machines, bicycles, and mirrors. Ouya, a video game console running Android, became one of the most successful Kickstarter campaigns, crowdfunding US$8.5 million for its development, and was later followed by other Android-based consoles, such as Nvidia's Shield Portable, an Android device in a video game controller form factor. In 2011, Google demonstrated "Android@Home", a home automation technology which uses Android to control a range of household devices, including light switches, power sockets and thermostats. Prototype light bulbs were announced that could be controlled from an Android phone or tablet, but Android head Andy Rubin was cautious to note that "turning a lightbulb on and off is nothing new", pointing to numerous failed home automation services. Google, he said, was thinking more ambitiously, and the intention was to use its position as a cloud services provider to bring Google products into customers' homes. Parrot unveiled an Android-based car stereo system known as Asteroid in 2011, followed by a successor, the touchscreen-based Asteroid Smart, in 2012. In 2013, Clarion released its own Android-based car stereo, the AX1. In January 2014, at the Consumer Electronics Show (CES), Google announced the formation of the Open Automotive Alliance, a group including several major automobile makers (Audi, General Motors, Hyundai, and Honda) and Nvidia, which aims to produce Android-based in-car entertainment systems for automobiles, "[bringing] the best of Android into the automobile in a safe and seamless way."
Android comes preinstalled on a few laptops (similar functionality for running Android applications is also available in Google's ChromeOS) and can also be installed on personal computers by end users. On those platforms Android provides additional functionality for physical keyboards and mice, together with the "Alt-Tab" key combination for switching applications quickly with a keyboard. In December 2014, one reviewer commented that Android's notification system is "vastly more complete and robust than in most environments" and that Android is "absolutely usable" as one's primary desktop operating system. In October 2015, "The Wall Street Journal" reported that Android would serve as Google's future main laptop operating system, with the plan to fold ChromeOS into it by 2017. Google's Sundar Pichai, who led the development of Android, explained that "mobile as a computing paradigm is eventually going to blend with what we think of as desktop today." Back in 2009, Google co-founder Sergey Brin himself had said that ChromeOS and Android would "likely converge over time." Lockheimer, who replaced Pichai as head of Android and ChromeOS, responded to this claim with an official Google blog post stating that "While we've been working on ways to bring together the best of both operating systems, there's no plan to phase out ChromeOS [which has] guaranteed auto-updates for five years". That is unlike Android, where support is shorter, with "EOL dates [being..] at least 3 years [into the future] for Android tablets for education". At Google I/O in May 2016, Google announced Daydream, a virtual reality platform that relies on a smartphone and provides VR capabilities through a virtual reality headset and controller designed by Google itself. The platform is built into Android starting with Android Nougat, differentiating it from standalone support for VR capabilities. The software was made available to developers and was released in 2016. The mascot of Android is a green android robot, reflecting the software's name. Although it has no official name, the Android team at Google reportedly calls it "Bugdroid". It was designed by then-Google graphic designer Irina Blok on November 5, 2007, when Android was announced. Contrary to reports that she was tasked with a project to create an icon, Blok confirmed in an interview that she independently developed it and made it open source. The robot design was initially not presented to Google, but it quickly became commonplace in the Android development team, with various variations of it created by the developers there who liked the figure, as it was free under a Creative Commons license. Its popularity among the development team eventually led to Google adopting it as an official icon as part of the Android logo when it launched to consumers in 2008.
Albert Einstein (14 March 1879 – 18 April 1955) was a German-born theoretical physicist, widely acknowledged to be one of the greatest and most influential physicists of all time. Einstein is best known for developing the theory of relativity, but he also made important contributions to the development of the theory of quantum mechanics. Relativity and quantum mechanics are the two pillars of modern physics. His mass–energy equivalence formula E = mc², which arises from relativity theory, has been dubbed "the world's most famous equation". His work is also known for its influence on the philosophy of science. He received the 1921 Nobel Prize in Physics "for his services to theoretical physics, and especially for his discovery of the law of the photoelectric effect", a pivotal step in the development of quantum theory. His intellectual achievements and originality resulted in "Einstein" becoming synonymous with "genius". Einsteinium, one of the synthetic elements in the periodic table, was named in his honor. In 1905, a year sometimes described as his "annus mirabilis" ('miracle year'), Einstein published four groundbreaking papers. These outlined the theory of the photoelectric effect, explained Brownian motion, introduced special relativity, and demonstrated mass–energy equivalence. Einstein thought that the laws of classical mechanics could no longer be reconciled with those of the electromagnetic field, which led him to develop his special theory of relativity. He then extended the theory to gravitational fields; he published a paper on general relativity in 1916, introducing his theory of gravitation. In 1917, he applied the general theory of relativity to model the structure of the universe. He continued to deal with problems of statistical mechanics and quantum theory, which led to his explanations of particle theory and the motion of molecules. He also investigated the thermal properties of light and the quantum theory of radiation, which laid the foundation of the photon theory of light. However, for much of the later part of his career, he worked on two ultimately unsuccessful endeavors. First, despite his great contributions to quantum mechanics, he opposed what it evolved into, objecting that "God does not play dice". Second, he attempted to devise a unified field theory by generalizing his geometric theory of gravitation to include electromagnetism. As a result, he became increasingly isolated from the mainstream of modern physics. Einstein was born in the German Empire, but moved to Switzerland in 1895, forsaking his German citizenship (as a subject of the Kingdom of Württemberg) the following year. In 1896, at the age of 17, he enrolled in the mathematics and physics teaching diploma program at the Swiss Federal polytechnic school in Zürich, graduating in 1900. In 1901, he acquired Swiss citizenship, which he kept for the rest of his life, and in 1903 he secured a permanent position at the Swiss Patent Office in Bern. In 1905, he was awarded a PhD by the University of Zurich. In 1914, Einstein moved to Berlin in order to join the Prussian Academy of Sciences and the Humboldt University of Berlin. In 1917, Einstein became director of the Kaiser Wilhelm Institute for Physics; he also became a German citizen again, this time Prussian.
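Since the passage above names the mass–energy equivalence formula, it may help to spell it out in modern notation; this is the standard textbook form, not a rendering taken from this article's sources.

```latex
% Rest energy of a body of mass m (c is the speed of light in vacuum):
E = m c^{2}
% General energy--momentum relation, which reduces to the line above
% when the momentum p is zero:
E^{2} = (p c)^{2} + (m c^{2})^{2}
```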
In 1933, while Einstein was visiting the United States, Adolf Hitler came to power in Germany. Einstein, as a Jew, objected to the policies of the newly elected Nazi government; he settled in the United States and became an American citizen in 1940. On the eve of World War II, he endorsed a letter to President Franklin D. Roosevelt alerting him to the potential German nuclear weapons program and recommending that the US begin similar research. Einstein supported the Allies but generally denounced the idea of nuclear weapons. Albert Einstein was born in Ulm, in the Kingdom of Württemberg in the German Empire, on 14 March 1879 into a family of secular Ashkenazi Jews. His parents were Hermann Einstein, a salesman and engineer, and Pauline Koch. In 1880, the family moved to Munich, where Einstein's father and his uncle Jakob founded "Elektrotechnische Fabrik J. Einstein & Cie", a company that manufactured electrical equipment based on direct current. Albert attended a Catholic elementary school in Munich from the age of five, for three years. At the age of eight, he was transferred to the Luitpold-Gymnasium (now known as the Albert-Einstein-Gymnasium), where he received advanced primary and secondary school education until he left the German Empire seven years later. In 1894, Hermann and Jakob's company lost a bid to supply the city of Munich with electrical lighting because they lacked the capital to convert their equipment from the direct current (DC) standard to the more efficient alternating current (AC) standard. The loss forced the sale of the Munich factory. In search of business, the Einstein family moved to Italy, first to Milan and a few months later to Pavia. When the family moved to Pavia, Einstein, then 15, stayed in Munich to finish his studies at the Luitpold Gymnasium. His father intended for him to pursue electrical engineering, but Einstein clashed with the authorities and resented the school's regimen and teaching method. He later wrote that the spirit of learning and creative thought was lost in strict rote learning. At the end of December 1894, he traveled to Italy to join his family in Pavia, convincing the school to let him go by using a doctor's note. During his time in Italy he wrote a short essay with the title "On the Investigation of the State of the Ether in a Magnetic Field". Einstein excelled at math and physics from a young age, reaching a mathematical level years ahead of his peers. The 12-year-old Einstein taught himself algebra and Euclidean geometry over a single summer, and also independently discovered his own original proof of the Pythagorean theorem at that age. A family tutor, Max Talmud, said that shortly after he had given the 12-year-old Einstein a geometry textbook, "[Einstein] had worked through the whole book. He thereupon devoted himself to higher mathematics ... Soon the flight of his mathematical genius was so high I could not follow." His passion for geometry and algebra led the 12-year-old to become convinced that nature could be understood as a "mathematical structure". Einstein started teaching himself calculus at 12, and as a 14-year-old he said he had "mastered integral and differential calculus".
At the age of 13, when he had become more seriously interested in philosophy (and music), Einstein was introduced to Kant's "Critique of Pure Reason". Kant became his favorite philosopher, his tutor stating: "At the time he was still a child, only thirteen years old, yet Kant's works, incomprehensible to ordinary mortals, seemed to be clear to him." In 1895, at the age of 16, Einstein took the entrance examinations for the Swiss Federal polytechnic school in Zürich (later the Eidgenössische Technische Hochschule, ETH). He failed to reach the required standard in the general part of the examination, but obtained exceptional grades in physics and mathematics. On the advice of the principal of the polytechnic school, he attended the Argovian cantonal school ("gymnasium") in Aarau, Switzerland, in 1895 and 1896 to complete his secondary schooling. While lodging with the family of Jost Winteler, he fell in love with Winteler's daughter, Marie. Albert's sister Maja later married Winteler's son Paul. In January 1896, with his father's approval, Einstein renounced his citizenship in the German Kingdom of Württemberg to avoid military service. In September 1896 he passed the Swiss "Matura" with mostly good grades, including a top grade of 6 in physics and mathematical subjects, on a scale of 1–6. At 17, he enrolled in the four-year mathematics and physics teaching diploma program at the Federal polytechnic school. Marie Winteler, who was a year older, moved to Olsberg, Switzerland, for a teaching post. Einstein's future wife, a 20-year-old Serbian named Mileva Marić, also enrolled at the polytechnic school that year. She was the only woman among the six students in the mathematics and physics section of the teaching diploma course. Over the next few years, Einstein's and Marić's friendship developed into a romance, and they spent countless hours debating and reading books together on extra-curricular physics in which they were both interested. Einstein wrote in his letters to Marić that he preferred studying alongside her. In 1900, Einstein passed the exams in mathematics and physics and was awarded a Federal teaching diploma. Eyewitness accounts and several letters over many years indicate that Marić might have collaborated with Einstein prior to his landmark 1905 papers, known as the "Annus Mirabilis" papers, and that they developed some of the concepts together during their studies, although some historians of physics who have studied the issue disagree that she made any substantive contributions.
Early correspondence between Einstein and Marić was discovered and published in 1987, revealing that the couple had a daughter named "Lieserl", born in early 1902 in Novi Sad where Marić was staying with her parents. Marić returned to Switzerland without the child, whose real name and fate are unknown. The contents of Einstein's letter in September 1903 suggest that the girl was either given up for adoption or died of scarlet fever in infancy. Einstein and Marić married in January 1903. In May 1904, their son Hans Albert Einstein was born in Bern, Switzerland. Their son Eduard was born in Zürich in July 1910. The couple moved to Berlin in April 1914, but Marić returned to Zürich with their sons after learning that, despite their close relationship before, Einstein's chief romantic attraction was now his cousin Elsa Löwenthal; she was his first cousin maternally and second cousin paternally. Einstein and Marić divorced on 14 February 1919, having lived apart for five years. As part of the divorce settlement, Einstein agreed to give Marić any future Nobel Prize money (in the event, the 1921 prize). In letters revealed in 2015, Einstein wrote to his early love Marie Winteler about his marriage and his strong feelings for her. He wrote in 1910, while his wife was pregnant with their second child: "I think of you in heartfelt love every spare minute and am so unhappy as only a man can be." He spoke about a "misguided love" and a "missed life" regarding his love for Marie. Einstein married Löwenthal in 1919, after having had a relationship with her since 1912. They emigrated to the United States in 1933. Elsa was diagnosed with heart and kidney problems in 1935 and died in December 1936. In 1923, Einstein fell in love with a secretary named Betty Neumann, the niece of a close friend, Hans Mühsam. In a volume of letters released by the Hebrew University of Jerusalem in 2006, Einstein described about six women, including Margarete Lebach (a blonde Austrian), Estella Katzenellenbogen (the rich owner of a florist business), Toni Mendel (a wealthy Jewish widow) and Ethel Michanowski (a Berlin socialite), with whom he spent time and from whom he received gifts while being married to Elsa. Later, after the death of his second wife Elsa, Einstein was briefly in a relationship with Margarita Konenkova. Konenkova was a Russian spy who was married to the Russian sculptor Sergei Konenkov (who created the bronze bust of Einstein at the Institute for Advanced Study in Princeton).
Einstein's son Eduard had a breakdown at about age 20 and was diagnosed with schizophrenia. His mother cared for him, and he was also committed to asylums for several periods; after her death, he was committed permanently to Burghölzli, the Psychiatric University Hospital in Zürich. After graduating in 1900, Einstein spent almost two years searching for a teaching post. He acquired Swiss citizenship in February 1901, but was not conscripted for medical reasons. With the help of Marcel Grossmann's father, he secured a job in Bern at the Swiss Patent Office as an assistant examiner, level III. Einstein evaluated patent applications for a variety of devices, including a gravel sorter and an electromechanical typewriter. In 1903, his position at the Swiss Patent Office became permanent, although he was passed over for promotion until he "fully mastered machine technology". Much of his work at the patent office related to questions about the transmission of electric signals and the electrical-mechanical synchronization of time, two technical problems that show up conspicuously in the thought experiments that eventually led Einstein to his radical conclusions about the nature of light and the fundamental connection between space and time. With a few friends he had met in Bern, Einstein started a small discussion group in 1902, self-mockingly named "The Olympia Academy", which met regularly to discuss science and philosophy. Sometimes they were joined by Mileva, who listened attentively but did not participate. Their readings included the works of Henri Poincaré, Ernst Mach, and David Hume, which influenced his scientific and philosophical outlook. In 1900, Einstein's paper "Folgerungen aus den Capillaritätserscheinungen" ("Conclusions from the Capillarity Phenomena") was published in the journal "Annalen der Physik". On 30 April 1905, Einstein completed his dissertation, "A New Determination of Molecular Dimensions", with Alfred Kleiner serving as "pro-forma" advisor. His thesis was accepted in July 1905, and Einstein was awarded a PhD on 15 January 1906. Also in 1905, which has been called Einstein's "annus mirabilis" (amazing year), he published four groundbreaking papers, on the photoelectric effect, Brownian motion, special relativity, and the equivalence of mass and energy, which were to bring him to the notice of the academic world at the age of 26. By 1908, he was recognized as a leading scientist and was appointed lecturer at the University of Bern. The following year, after he gave a lecture on electrodynamics and the relativity principle at the University of Zurich, Alfred Kleiner recommended him to the faculty for a newly created professorship in theoretical physics. Einstein was appointed associate professor in 1909.
Einstein became a full professor at the German Charles-Ferdinand University in Prague in April 1911, accepting Austrian citizenship in the Austro-Hungarian Empire to do so. During his Prague stay, he wrote 11 scientific works, five of them on radiation mathematics and on the quantum theory of solids. In July 1912, he returned to his alma mater in Zürich. From 1912 until 1914, he was a professor of theoretical physics at the ETH Zurich, where he taught analytical mechanics and thermodynamics. He also studied continuum mechanics, the molecular theory of heat, and the problem of gravitation, on which he worked with the mathematician and friend Marcel Grossmann. When the "Manifesto of the Ninety-Three", a document signed by a host of prominent German intellectuals that justified Germany's militarism and position during the First World War, was published in October 1914, Einstein was one of the few German intellectuals to rebut its contents and sign the pacifistic "Manifesto to the Europeans". In the spring of 1913, Einstein was enticed to move to Berlin with an offer that included membership in the Prussian Academy of Sciences and a linked University of Berlin professorship, enabling him to concentrate exclusively on research. On 3 July 1913, he became a member of the Prussian Academy of Sciences in Berlin. Max Planck and Walther Nernst visited him the next week in Zurich to persuade him to join the academy, additionally offering him the post of director at the Kaiser Wilhelm Institute for Physics, which was soon to be established. Membership in the academy included a paid salary and a professorship without teaching duties at Humboldt University of Berlin. He was officially elected to the academy on 24 July, and he moved to Berlin the following year. His decision to move to Berlin was also influenced by the prospect of living near his cousin Elsa, with whom he had started a romantic affair. Einstein assumed his position with the academy, and Berlin University, after moving into his Dahlem apartment on 1 April 1914. As World War I broke out that year, the plan for the Kaiser Wilhelm Institute for Physics was delayed. The institute was established on 1 October 1917, with Einstein as its director. In 1916, Einstein was elected president of the German Physical Society (1916–1918). In 1911, Einstein had used his 1907 equivalence principle to calculate the deflection of light from another star by the Sun's gravity. In 1913, Einstein improved upon those calculations by using Riemannian space-time to represent the gravity field. By the fall of 1915, Einstein had successfully completed his general theory of relativity, which he used to calculate that deflection and the perihelion precession of Mercury. That deflection prediction was confirmed by Sir Arthur Eddington during the solar eclipse of 29 May 1919. Those observations were published in the international media, making Einstein world-famous. On 7 November 1919, the leading British newspaper "The Times" printed a banner headline that read: "Revolution in Science – New Theory of the Universe – Newtonian Ideas Overthrown".
|
Albert Einstein
|
https://en.wikipedia.org/wiki?curid=736
| 11,380 |
In 1920, he became a Foreign Member of the Royal Netherlands Academy of Arts and Sciences. In 1922, he was awarded the 1921 Nobel Prize in Physics "for his services to Theoretical Physics, and especially for his discovery of the law of the photoelectric effect". While the general theory of relativity was still considered somewhat controversial, the citation also did not treat even the cited photoelectric work as an "explanation" but merely as a "discovery of the law", since the idea of photons was considered outlandish and did not receive universal acceptance until the 1924 derivation of the Planck spectrum by S. N. Bose. Einstein was elected a Foreign Member of the Royal Society (ForMemRS) in 1921. He also received the Copley Medal from the Royal Society in 1925. Einstein resigned from the Prussian Academy in March 1933. Einstein's scientific accomplishments while in Berlin included finishing the general theory of relativity, proving the gyromagnetic effect, contributing to the quantum theory of radiation, and developing Bose–Einstein statistics. Einstein visited New York City for the first time on 2 April 1921, where he received an official welcome by Mayor John Francis Hylan, followed by three weeks of lectures and receptions. He went on to deliver several lectures at Columbia University and Princeton University, and in Washington, he accompanied representatives of the National Academy of Sciences on a visit to the White House. On his return to Europe he was the guest of the British statesman and philosopher Viscount Haldane in London, where he met several renowned scientific, intellectual, and political figures, and delivered a lecture at King's College London. He also published an essay, "My First Impression of the U.S.A.", in July 1921, in which he tried briefly to describe some characteristics of Americans, much as had Alexis de Tocqueville, who published his own impressions in "Democracy in America" (1835). For some of his observations, Einstein was clearly surprised: "What strikes a visitor is the joyous, positive attitude to life ... The American is friendly, self-confident, optimistic, and without envy." In 1922, his travels took him to Asia and later to Palestine as part of a six-month excursion and speaking tour, during which he visited Singapore, Ceylon and Japan, where he gave a series of lectures to thousands of Japanese. After his first public lecture, he met the emperor and empress at the Imperial Palace, where thousands came to watch. In a letter to his sons, he described his impression of the Japanese as being modest, intelligent, considerate, and having a true feel for art. In his own travel diaries from his 1922–23 visit to Asia, he expressed some views on the Chinese, Japanese and Indian people, which were described as xenophobic and racist when they were rediscovered in 2018.
|
Albert Einstein
|
https://en.wikipedia.org/wiki?curid=736
| 11,385 |
Because of Einstein's travels to the Far East, he was unable to personally accept the Nobel Prize for Physics at the Stockholm award ceremony in December 1922. In his place, the banquet speech was made by a German diplomat, who praised Einstein not only as a scientist but also as an international peacemaker and activist. On his return voyage, he visited Palestine for 12 days, his only visit to that region. He was greeted as if he were a head of state rather than a physicist; the welcome included a cannon salute upon his arrival at the home of the British high commissioner, Sir Herbert Samuel. During one reception, the building was stormed by people who wanted to see and hear him. In Einstein's talk to the audience, he expressed happiness that the Jewish people were beginning to be recognized as a force in the world. Einstein visited Spain for two weeks in 1923, where he briefly met Santiago Ramón y Cajal and also received a diploma from King Alfonso XIII naming him a member of the Spanish Academy of Sciences. From 1922 to 1932, Einstein was a member of the International Committee on Intellectual Cooperation of the League of Nations in Geneva (with a few months of interruption in 1923–1924), a body created to promote international exchange between scientists, researchers, teachers, artists, and intellectuals. Einstein was originally slated to serve as the Swiss delegate, but Secretary-General Eric Drummond was persuaded by Catholic activists Oskar Halecki and Giuseppe Motta to have him become the German delegate instead, thus allowing Gonzague de Reynold to take the Swiss spot, from which he promoted traditionalist Catholic values. Einstein's former physics professor Hendrik Lorentz and the Polish chemist Marie Curie were also members of the committee. In the months of March and April 1925, Einstein visited South America, where he spent about a month in Argentina, a week in Uruguay, and a week in Rio de Janeiro, Brazil. Einstein's visit was initiated by Jorge Duclout (1856–1927) and Mauricio Nirenstein (1877–1935) with the support of several Argentine scholars, including Julio Rey Pastor, Jakob Laub, and Leopoldo Lugones. The visit by Einstein and his wife was financed primarily by the Council of the University of Buenos Aires and the "Asociación Hebraica Argentina" (Argentine Hebraic Association), with a smaller contribution from the Argentine-Germanic Cultural Institution. In December 1930, Einstein visited America for the second time, originally intended as a two-month working visit as a research fellow at the California Institute of Technology. After the national attention he had received during his first trip to the US, he and his arrangers aimed to protect his privacy. Although swamped with telegrams and invitations to receive awards or speak publicly, he declined them all.
|
Albert Einstein
|
https://en.wikipedia.org/wiki?curid=736
| 11,391 |
After arriving in New York City, Einstein was taken to various places and events, including Chinatown, a lunch with the editors of "The New York Times", and a performance of "Carmen" at the Metropolitan Opera, where he was cheered by the audience on his arrival. During the days following, he was given the keys to the city by Mayor Jimmy Walker and met the president of Columbia University, who described Einstein as "the ruling monarch of the mind". Harry Emerson Fosdick, pastor at New York's Riverside Church, gave Einstein a tour of the church and showed him a full-size statue of Einstein that the church had made, standing at the entrance. Also during his stay in New York, he joined a crowd of 15,000 people at Madison Square Garden during a Hanukkah celebration. Einstein next traveled to California, where he met Caltech president and Nobel laureate Robert A. Millikan. His friendship with Millikan was "awkward", as Millikan "had a penchant for patriotic militarism", whereas Einstein was a pronounced pacifist. During an address to Caltech's students, Einstein noted that science was often inclined to do more harm than good. This aversion to war also led Einstein to befriend author Upton Sinclair and film star Charlie Chaplin, both noted for their pacifism. Carl Laemmle, head of Universal Studios, gave Einstein a tour of his studio and introduced him to Chaplin. They had an instant rapport, with Chaplin inviting Einstein and his wife, Elsa, to his home for dinner. Chaplin said Einstein's outward persona, calm and gentle, seemed to conceal a "highly emotional temperament", from which came his "extraordinary intellectual energy". Chaplin's film "City Lights" was to premiere a few days later in Hollywood, and Chaplin invited Einstein and Elsa to join him as his special guests. Walter Isaacson, Einstein's biographer, described this as "one of the most memorable scenes in the new era of celebrity". Chaplin visited Einstein at his home on a later trip to Berlin and recalled his "modest little flat" and the piano at which he had begun writing his theory. Chaplin speculated that it was "possibly used as kindling wood by the Nazis". In February 1933, while on a visit to the United States, Einstein knew he could not return to Germany with the rise to power of the Nazis under Germany's new chancellor, Adolf Hitler. While at American universities in early 1933, he undertook his third two-month visiting professorship at the California Institute of Technology in Pasadena. In February and March 1933, the Gestapo repeatedly raided his family's apartment in Berlin. He and his wife Elsa returned to Europe in March, and during the trip, they learned that the German Reichstag had passed the Enabling Act on 23 March, transforming Hitler's government into a "de facto" legal dictatorship, and that they would not be able to proceed to Berlin. Later on, they heard that their cottage had been raided by the Nazis and Einstein's personal sailboat confiscated. Upon landing in Antwerp, Belgium, on 28 March, Einstein immediately went to the German consulate and surrendered his passport, formally renouncing his German citizenship. The Nazis later sold his boat and converted his cottage into a Hitler Youth camp.
|
Albert Einstein
|
https://en.wikipedia.org/wiki?curid=736
| 11,397 |
In April 1933, Einstein discovered that the new German government had passed laws barring Jews from holding any official positions, including teaching at universities. Historian Gerald Holton describes how, with "virtually no audible protest being raised by their colleagues", thousands of Jewish scientists were suddenly forced to give up their university positions and their names were removed from the rolls of institutions where they were employed. A month later, Einstein's works were among those targeted by the German Student Union in the Nazi book burnings, with Nazi propaganda minister Joseph Goebbels proclaiming, "Jewish intellectualism is dead." One German magazine included him in a list of enemies of the German regime with the phrase "not yet hanged", offering a $5,000 bounty on his head. In a subsequent letter to physicist and friend Max Born, who had already emigrated from Germany to England, Einstein wrote, "... I must confess that the degree of their brutality and cowardice came as something of a surprise." After moving to the US, he described the book burnings as a "spontaneous emotional outburst" by those who "shun popular enlightenment" and, "more than anything else in the world, fear the influence of men of intellectual independence". Einstein was now without a permanent home, unsure where he would live and work, and equally worried about the fate of countless other scientists still in Germany. Aided by the Academic Assistance Council, founded in April 1933 by British liberal politician William Beveridge to help academics escape Nazi persecution, Einstein was able to leave Germany. He rented a house in De Haan, Belgium, where he lived for a few months. In late July 1933, he went to England for about six weeks at the personal invitation of British naval officer Commander Oliver Locker-Lampson, who had become friends with Einstein in the preceding years. Locker-Lampson invited him to stay near his home in a wooden cabin on Roughton Heath in the Parish of Roughton, Norfolk. To protect Einstein, Locker-Lampson had two bodyguards watch over him at his secluded cabin; a photo of them carrying shotguns and guarding Einstein was published in the "Daily Herald" on 24 July 1933. Locker-Lampson took Einstein to meet Winston Churchill at his home, and later Austen Chamberlain and former Prime Minister Lloyd George. Einstein asked them to help bring Jewish scientists out of Germany. British historian Martin Gilbert notes that Churchill responded immediately, and sent his friend, physicist Frederick Lindemann, to Germany to seek out Jewish scientists and place them in British universities. Churchill later observed that as a result of Germany having driven the Jews out, they had lowered their "technical standards" and put the Allies' technology ahead of theirs.
|
Albert Einstein
|
https://en.wikipedia.org/wiki?curid=736
| 11,401 |
Einstein later contacted leaders of other nations, including Turkey's Prime Minister, İsmet İnönü, to whom he wrote in September 1933 requesting placement of unemployed German-Jewish scientists. As a result of Einstein's letter, Jewish invitees to Turkey eventually totaled over "1,000 saved individuals". Locker-Lampson also submitted a bill to parliament to extend British citizenship to Einstein, during which period Einstein made a number of public appearances describing the crisis brewing in Europe. In one of his speeches, Locker-Lampson denounced Germany's treatment of Jews, and at the same time he introduced a bill promoting Jewish citizenship in Palestine, as Jews were being denied citizenship elsewhere. In that speech he described Einstein as a "citizen of the world" who should be offered temporary shelter in the UK. Both bills failed, however, and Einstein then accepted an earlier offer from the Institute for Advanced Study, in Princeton, New Jersey, US, to become a resident scholar. On 3 October 1933, Einstein delivered a speech on the importance of academic freedom before a packed audience at the Royal Albert Hall in London, with "The Times" reporting he was wildly cheered throughout. Four days later he returned to the US and took up a position at the Institute for Advanced Study, noted for having become a refuge for scientists fleeing Nazi Germany. At the time, most American universities, including Harvard, Princeton and Yale, had minimal or no Jewish faculty or students, as a result of their Jewish quotas, which lasted until the late 1940s. Einstein was still undecided on his future. He had offers from several European universities, including Christ Church, Oxford, where he stayed for three short periods between May 1931 and June 1933 and was offered a five-year research fellowship (called a "studentship" at Christ Church), but in 1935, he arrived at the decision to remain permanently in the United States and apply for citizenship. Einstein's affiliation with the Institute for Advanced Study would last until his death in 1955. He was one of the first four selected (along with John von Neumann, Kurt Gödel, and Hermann Weyl) at the new Institute, where he soon developed a close friendship with Gödel. The two would take long walks together discussing their work. Bruria Kaufman, his assistant, later became a physicist. During this period, Einstein tried to develop a unified field theory and to refute the accepted interpretation of quantum physics, both unsuccessfully. In 1939, a group of Hungarian scientists that included émigré physicist Leó Szilárd attempted to alert Washington to ongoing Nazi atomic bomb research. The group's warnings were discounted. Einstein and Szilárd, along with other refugees such as Edward Teller and Eugene Wigner, "regarded it as their responsibility to alert Americans to the possibility that German scientists might win the race to build an atomic bomb, and to warn that Hitler would be more than willing to resort to such a weapon." To make certain the US was aware of the danger, in July 1939, a few months before the beginning of World War II in Europe, Szilárd and Wigner visited Einstein to explain the possibility of atomic bombs, which Einstein, a pacifist, said he had never considered. He was asked to lend his support by writing a letter, with Szilárd, to President Roosevelt, recommending the US pay attention and engage in its own nuclear weapons research.
|
Albert Einstein
|
https://en.wikipedia.org/wiki?curid=736
| 11,407 |
The letter is believed to be "arguably the key stimulus for the U.S. adoption of serious investigations into nuclear weapons on the eve of the U.S. entry into World War II". In addition to the letter, Einstein used his connections with the Belgian Royal Family and the Belgian queen mother to gain access, through a personal envoy, to the White House's Oval Office. Some say that as a result of Einstein's letter and his meetings with Roosevelt, the US entered the "race" to develop the bomb, drawing on its "immense material, financial, and scientific resources" to initiate the Manhattan Project. For Einstein, "war was a disease ... [and] he called for resistance to war." By signing the letter to Roosevelt, some argue he went against his pacifist principles. In 1954, a year before his death, Einstein said to his old friend, Linus Pauling, "I made one great mistake in my life—when I signed the letter to President Roosevelt recommending that atom bombs be made; but there was some justification—the danger that the Germans would make them ..." In 1955, Einstein and ten other intellectuals and scientists, including British philosopher Bertrand Russell, signed a manifesto highlighting the danger of nuclear weapons. Einstein became an American citizen in 1940. Not long after settling into his career at the Institute for Advanced Study in Princeton, New Jersey, he expressed his appreciation of the meritocracy in American culture compared to Europe. He recognized the "right of individuals to say and think what they pleased" without social barriers. As a result, individuals were encouraged, he said, to be more creative, a trait he valued from his early education. Einstein joined the National Association for the Advancement of Colored People (NAACP) in Princeton, where he campaigned for the civil rights of African Americans. He considered racism America's "worst disease", seeing it as "handed down from one generation to the next". As part of his involvement, he corresponded with civil rights activist W. E. B. Du Bois and was prepared to testify on his behalf during his trial in 1951. When Einstein offered to be a character witness for Du Bois, the judge decided to drop the case. In 1946, Einstein visited Lincoln University in Pennsylvania, a historically black college, where he was awarded an honorary degree. Lincoln was the first university in the United States to grant college degrees to African Americans; alumni include Langston Hughes and Thurgood Marshall. Einstein gave a speech about racism in America, adding, "I do not intend to be quiet about it." A resident of Princeton recalled that Einstein had once paid the college tuition for a black student. Einstein said, "Being a Jew myself, perhaps I can understand and empathize with how black people feel as victims of discrimination".
|
Albert Einstein
|
https://en.wikipedia.org/wiki?curid=736
| 11,412 |
In 1918, Einstein was one of the founding members of the German Democratic Party, a liberal party. Later in his life, Einstein's political views favored socialism and were critical of capitalism, as he detailed in essays such as "Why Socialism?" His opinions on the Bolsheviks also changed with time. In 1925, he criticized them for not having a 'well-regulated system of government' and called their rule a 'regime of terror and a tragedy in human history'. He later adopted a more moderate view, criticizing their methods while praising their aims, as shown by his 1929 remark on Vladimir Lenin: "In Lenin I honor a man, who in total sacrifice of his own person has committed his entire energy to realizing social justice. I do not find his methods advisable. One thing is certain, however: men like him are the guardians and renewers of mankind's conscience." Einstein offered and was called on to give judgments and opinions on matters often unrelated to theoretical physics or mathematics. He strongly advocated the idea of a democratic global government that would check the power of nation-states in the framework of a world federation. He wrote, "I advocate world government because I am convinced that there is no other possible way of eliminating the most terrible danger in which man has ever found himself." The FBI created a secret dossier on Einstein in 1932, and by the time of his death his FBI file was 1,427 pages long. Einstein was deeply impressed by Mahatma Gandhi, with whom he exchanged written letters. He described Gandhi as "a role model for the generations to come". The initial connection was established on 27 September 1931, when Wilfrid Israel took his Indian guest V. A. Sundaram to meet his friend Einstein at his summer home in the town of Caputh. Sundaram was Gandhi's disciple and special envoy, whom Wilfrid Israel had met while traveling in India and visiting the Indian leader's home in 1925. During the visit, Einstein wrote a short letter to Gandhi that was delivered to him through his envoy, and Gandhi responded quickly with his own letter. Although in the end Einstein and Gandhi were unable to meet as they had hoped, the direct connection between them was established through Wilfrid Israel. Einstein was a figurehead leader in helping establish the Hebrew University of Jerusalem, which opened in 1925, and was among its first Board of Governors. Earlier, in 1921, he had been asked by the biochemist and president of the World Zionist Organization, Chaim Weizmann, to help raise funds for the planned university. He made suggestions for the creation of an Institute of Agriculture, a Chemical Institute and an Institute of Microbiology in order to fight the various ongoing epidemics such as malaria, which he called an "evil" that was undermining a third of the country's development. He also promoted the establishment of an Oriental Studies Institute, to include language courses given in both Hebrew and Arabic.
|
Albert Einstein
|
https://en.wikipedia.org/wiki?curid=736
| 11,415 |
Einstein was not a nationalist and was against the creation of an independent Jewish state; such a state was established without his help as Israel in 1948. He felt that the waves of arriving Jews of the Aliyah could live alongside existing Arabs in Palestine. Nevertheless, upon the death of Israeli president Weizmann in November 1952, Prime Minister David Ben-Gurion offered Einstein the largely ceremonial position of President of Israel at the urging of Ezriel Carlebach. The offer was presented by Israel's ambassador in Washington, Abba Eban, who explained that the offer "embodies the deepest respect which the Jewish people can repose in any of its sons". Einstein wrote that he was "deeply moved", but "at once saddened and ashamed" that he could not accept it. Einstein spoke of his spiritual outlook in a wide array of original writings and interviews. He said he had sympathy for the impersonal pantheistic God of Baruch Spinoza's philosophy. He did not believe in a personal god who concerns himself with the fates and actions of human beings, a view which he described as naïve. He clarified, however, that "I am not an atheist", preferring to call himself an agnostic, or a "deeply religious nonbeliever". When asked if he believed in an afterlife, Einstein replied, "No. And one life is enough for me." Einstein was primarily affiliated with non-religious humanist and Ethical Culture groups in both the UK and US. He served on the advisory board of the First Humanist Society of New York, and was an honorary associate of the Rationalist Association, which publishes "New Humanist" in Britain. For the 75th anniversary of the New York Society for Ethical Culture, he stated that the idea of Ethical Culture embodied his personal conception of what is most valuable and enduring in religious idealism. He observed, "Without 'ethical culture' there is no salvation for humanity." In a German-language letter to philosopher Eric Gutkind, dated 3 January 1954, Einstein wrote: "The word God is for me nothing more than the expression and product of human weaknesses, the Bible a collection of honorable, but still primitive legends which are nevertheless pretty childish. No interpretation no matter how subtle can (for me) change this. ... For me the Jewish religion like all other religions is an incarnation of the most childish superstitions. And the Jewish people to whom I gladly belong and with whose mentality I have a deep affinity have no different quality for me than all other people. ... I cannot see anything 'chosen' about them."
|
Albert Einstein
|
https://en.wikipedia.org/wiki?curid=736
| 11,419 |
Einstein had been sympathetic toward vegetarianism for a long time. In a 1930 letter to Hermann Huth, vice-president of the German Vegetarian Federation (Deutsche Vegetarier-Bund), he wrote: "Although I have been prevented by outward circumstances from observing a strictly vegetarian diet, I have long been an adherent to the cause in principle. Besides agreeing with the aims of vegetarianism for aesthetic and moral reasons, it is my view that a vegetarian manner of living by its purely physical effect on the human temperament would most beneficially influence the lot of mankind." He became a vegetarian himself only during the last part of his life. In March 1954 he wrote in a letter: "So I am living without fats, without meat, without fish, but am feeling quite well this way. It almost seems to me that man was not born to be a carnivore." His mother played the piano reasonably well and wanted her son to learn the violin, not only to instill in him a love of music but also to help him assimilate into German culture. According to conductor Leon Botstein, Einstein began playing when he was 5. However, he did not enjoy it at that age. When he turned 13, he discovered the violin sonatas of Mozart, whereupon he became enamored of Mozart's compositions and studied music more willingly. Einstein taught himself to play without "ever practicing systematically". He said that "love is a better teacher than a sense of duty." At the age of 17, he was heard by a school examiner in Aarau while playing Beethoven's violin sonatas. The examiner stated afterward that his playing was "remarkable and revealing of 'great insight'". What struck the examiner, writes Botstein, was that Einstein "displayed a deep love of the music, a quality that was and remains in short supply. Music possessed an unusual meaning for this student." Music took on a pivotal and permanent role in Einstein's life from that period on. Although he never considered becoming a professional musician, among those with whom Einstein played chamber music were a few professionals, including Kurt Appelbaum, and he performed for private audiences and friends. Chamber music had also become a regular part of his social life while living in Bern, Zürich, and Berlin, where he played with Max Planck and his son, among others. He is sometimes erroneously credited as the editor of the 1937 edition of the Köchel catalog of Mozart's work; that edition was prepared by Alfred Einstein, who may have been a distant relation.
|
Albert Einstein
|
https://en.wikipedia.org/wiki?curid=736
| 11,424 |
In 1931, while engaged in research at the California Institute of Technology, he visited the Zoellner family conservatory in Los Angeles, where he played some of Beethoven's and Mozart's works with members of the Zoellner Quartet. Near the end of his life, when the young Juilliard Quartet visited him in Princeton, he played his violin with them, and the quartet was "impressed by Einstein's level of coordination and intonation". On 17 April 1955, Einstein experienced internal bleeding caused by the rupture of an abdominal aortic aneurysm, which had previously been reinforced surgically by Rudolph Nissen in 1948. He took the draft of a speech he was preparing for a television appearance commemorating the state of Israel's seventh anniversary with him to the hospital, but he did not live to complete it. Einstein refused surgery, saying, "I want to go when I want. It is tasteless to prolong life artificially. I have done my share; it is time to go. I will do it elegantly." He died in the University Medical Center of Princeton at Plainsboro early the next morning at the age of 76, having continued to work until near the end. During the autopsy, the pathologist Thomas Stoltz Harvey removed Einstein's brain for preservation without the permission of his family, in the hope that the neuroscience of the future would be able to discover what made Einstein so intelligent. Einstein's remains were cremated in Trenton, New Jersey, and his ashes were scattered at an undisclosed location. In a memorial lecture delivered on 13 December 1965 at UNESCO headquarters, nuclear physicist J. Robert Oppenheimer summarized his impression of Einstein as a person: "He was almost wholly without sophistication and wholly without worldliness ... There was always with him a wonderful purity at once childlike and profoundly stubborn." Einstein bequeathed his personal archives, library, and intellectual assets to the Hebrew University of Jerusalem in Israel. Throughout his life, Einstein published hundreds of books and articles. He published more than 300 scientific papers and 150 non-scientific ones. On 5 December 2014, universities and archives announced the release of Einstein's papers, comprising more than 30,000 unique documents. Einstein's intellectual achievements and originality have made the word "Einstein" synonymous with "genius". In addition to the work he did by himself, he also collaborated with other scientists on additional projects, including the Bose–Einstein statistics, the Einstein refrigerator and others. The "Annus Mirabilis" papers are four articles pertaining to the photoelectric effect (which gave rise to quantum theory), Brownian motion, the special theory of relativity, and E = mc² that Einstein published in the "Annalen der Physik" scientific journal in 1905. These four works contributed substantially to the foundation of modern physics and changed views on space, time, and matter. The four papers are: "Über einen die Erzeugung und Verwandlung des Lichtes betreffenden heuristischen Gesichtspunkt" (on the photoelectric effect), "Über die von der molekularkinetischen Theorie der Wärme geforderte Bewegung von in ruhenden Flüssigkeiten suspendierten Teilchen" (on Brownian motion), "Zur Elektrodynamik bewegter Körper" (on special relativity), and "Ist die Trägheit eines Körpers von seinem Energieinhalt abhängig?" (on mass–energy equivalence).
|
Albert Einstein
|
https://en.wikipedia.org/wiki?curid=736
| 11,432 |
Einstein's first paper, submitted in 1900 to "Annalen der Physik", was on capillary attraction. It was published in 1901 with the title "Folgerungen aus den Capillaritätserscheinungen", which translates as "Conclusions from the capillarity phenomena". Two papers he published in 1902–1903, on thermodynamics, attempted to interpret atomic phenomena from a statistical point of view. These papers were the foundation for the 1905 paper on Brownian motion, which showed that Brownian movement can be construed as firm evidence that molecules exist. His research in 1903 and 1904 was mainly concerned with the effect of finite atomic size on diffusion phenomena. Einstein later returned to the problem of thermodynamic fluctuations, giving a treatment of the density variations in a fluid at its critical point. Ordinarily the density fluctuations are controlled by the second derivative of the free energy with respect to the density. At the critical point, this derivative is zero, leading to large fluctuations. The effect of density fluctuations is that light of all wavelengths is scattered, making the fluid look milky white. Einstein related this to Rayleigh scattering, which is what happens when the fluctuation size is much smaller than the wavelength, and which explains why the sky is blue. Einstein quantitatively derived critical opalescence from a treatment of density fluctuations, and demonstrated how both the effect and Rayleigh scattering originate from the atomistic constitution of matter. Einstein's "Zur Elektrodynamik bewegter Körper" ("On the Electrodynamics of Moving Bodies") was received on 30 June 1905 and published on 26 September of that same year. It reconciled conflicts between Maxwell's equations (the laws of electricity and magnetism) and the laws of Newtonian mechanics by introducing changes to the laws of mechanics. Observationally, the effects of these changes are most apparent at high speeds (where objects are moving at speeds close to the speed of light). The theory developed in this paper later became known as Einstein's special theory of relativity. There is evidence from Einstein's writings that he collaborated with his first wife, Mileva Marić, on this work. The decision to publish only under his name seems to have been mutual, but the exact reason is unknown. This paper predicted that, when measured in the frame of a relatively moving observer, a clock carried by a moving body would appear to slow down, and the body itself would contract in its direction of motion. This paper also argued that the idea of a luminiferous aether—one of the leading theoretical entities in physics at the time—was superfluous.
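As a rough illustration of the Rayleigh-scattering argument sketched above, the inverse fourth-power dependence on wavelength can be tabulated directly. The Python snippet below is a hedged, minimal sketch of that textbook relation only; it does not reproduce Einstein's full fluctuation calculation, and the reference wavelengths are illustrative choices.

```python
# Hedged illustration of the 1/lambda^4 law behind Rayleigh scattering.
# A textbook relation, not Einstein's critical-opalescence derivation.
def rayleigh_relative_intensity(wavelength_nm):
    """Scattered intensity relative to 450 nm blue light (I ~ 1/lambda^4)."""
    return (450.0 / wavelength_nm) ** 4

for name, lam in [("violet", 400), ("blue", 450), ("green", 550), ("red", 700)]:
    print(f"{name:6s} {lam} nm: {rayleigh_relative_intensity(lam):.2f}x")
# Blue light scatters ~(700/450)^4 ≈ 5.8 times more strongly than red,
# which is why the sky looks blue away from the Sun. Near the critical
# point the fluctuation size grows past the wavelength, the 1/lambda^4
# selectivity is lost, and all colours scatter: critical opalescence.
```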
|
Albert Einstein
|
https://en.wikipedia.org/wiki?curid=736
| 11,436 |
In his paper on mass–energy equivalence, Einstein produced E = mc² as a consequence of his special relativity equations. Einstein's 1905 work on relativity remained controversial for many years, but was accepted by leading physicists, starting with Max Planck. Einstein originally framed special relativity in terms of kinematics (the study of moving bodies). In 1908, Hermann Minkowski reinterpreted special relativity in geometric terms as a theory of spacetime. Einstein adopted Minkowski's formalism in his 1915 general theory of relativity. General relativity (GR) is a theory of gravitation that was developed by Einstein between 1907 and 1915. According to general relativity, the observed gravitational attraction between masses results from the warping of space and time by those masses. General relativity has developed into an essential tool in modern astrophysics. It provides the foundation for the current understanding of black holes, regions of space where gravitational attraction is so strong that not even light can escape. As Einstein later said, the reason for the development of general relativity was that the preference for inertial motions within special relativity was unsatisfactory, while a theory which from the outset prefers no state of motion (even accelerated ones) should appear more satisfactory. Consequently, in 1907 he published an article on acceleration under special relativity. In that article, titled "On the Relativity Principle and the Conclusions Drawn from It", he argued that free fall is really inertial motion, and that for a free-falling observer the rules of special relativity must apply. This argument is called the equivalence principle. In the same article, Einstein also predicted the phenomena of gravitational time dilation, gravitational redshift and deflection of light. In 1911, Einstein published another article, "On the Influence of Gravitation on the Propagation of Light", expanding on the 1907 article, in which he estimated the amount of deflection of light by massive bodies. Thus, the theoretical prediction of general relativity could for the first time be tested experimentally. In 1916, Einstein predicted gravitational waves, ripples in the curvature of spacetime which propagate as waves, traveling outward from the source, transporting energy as gravitational radiation. The existence of gravitational waves is possible under general relativity due to its Lorentz invariance, which brings with it the concept of a finite speed of propagation for the physical interactions of gravity. By contrast, gravitational waves cannot exist in the Newtonian theory of gravitation, which postulates that the physical interactions of gravity propagate at infinite speed. The first, indirect, detection of gravitational waves came in the 1970s through observation of a pair of closely orbiting neutron stars, PSR B1913+16. The explanation for the decay in their orbital period was that they were emitting gravitational waves. Einstein's prediction was confirmed on 11 February 2016, when researchers at LIGO published the first observation of gravitational waves, detected on Earth on 14 September 2015, nearly one hundred years after the prediction.
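The kinematic predictions named here (moving clocks slow, moving rods contract) all follow from one quantity, the Lorentz factor. The following Python sketch evaluates it at an illustrative speed of 0.8c, together with the rest energy from E = mc²; the numbers are worked examples, not values taken from Einstein's papers.

```python
# Minimal sketch of the effects the 1905 papers predict: time dilation,
# length contraction, and mass-energy equivalence. Illustrative values only.
import math

c = 2.998e8  # speed of light, m/s

def lorentz_gamma(v):
    """Lorentz factor gamma = 1 / sqrt(1 - v^2/c^2)."""
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

v = 0.8 * c
g = lorentz_gamma(v)
print(f"gamma at 0.8c: {g:.3f}")                                   # 1.667
print(f"moving clock rate: {1/g:.3f} of rest rate (time dilation)") # 0.600
print(f"moving metre rod measures: {1/g:.3f} m (length contraction)")

# E = mc^2: rest energy of one gram of matter
m = 1e-3  # kg
print(f"E = mc^2 for 1 g: {m * c**2:.3e} J")  # ~9e13 J
```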
|
Albert Einstein
|
https://en.wikipedia.org/wiki?curid=736
| 11,443 |
While developing general relativity, Einstein became confused about the gauge invariance in the theory. He formulated an argument that led him to conclude that a general relativistic field theory is impossible. He gave up looking for fully generally covariant tensor equations and searched for equations that would be invariant under general linear transformations only. In June 1913, the Entwurf ('draft') theory was the result of these investigations. As its name suggests, it was a sketch of a theory, less elegant and more difficult than general relativity, with the equations of motion supplemented by additional gauge fixing conditions. After more than two years of intensive work, Einstein realized that the hole argument was mistaken and abandoned the theory in November 1915. In 1917, Einstein applied the general theory of relativity to the structure of the universe as a whole. He discovered that the general field equations predicted a universe that was dynamic, either contracting or expanding. As observational evidence for a dynamic universe was not known at the time, Einstein introduced a new term, the cosmological constant, into the field equations, in order to allow the theory to predict a static universe. The modified field equations predicted a static universe of closed curvature, in accordance with Einstein's understanding of Mach's principle in these years. This model became known as the Einstein World or Einstein's static universe. Following the discovery of the recession of the nebulae by Edwin Hubble in 1929, Einstein abandoned his static model of the universe, and proposed two dynamic models of the cosmos, the Friedmann–Einstein universe of 1931 and the Einstein–de Sitter universe of 1932. In each of these models, Einstein discarded the cosmological constant, claiming that it was "in any case theoretically unsatisfactory". In many Einstein biographies, it is claimed that Einstein referred to the cosmological constant in later years as his "biggest blunder", based on a letter George Gamow claimed to have received from him. The astrophysicist Mario Livio has cast doubt on this claim. In late 2013, a team led by the Irish physicist Cormac O'Raifeartaigh discovered evidence that, shortly after learning of Hubble's observations of the recession of the nebulae, Einstein considered a steady-state model of the universe. In a hitherto overlooked manuscript, apparently written in early 1931, Einstein explored a model of the expanding universe in which the density of matter remains constant due to a continuous creation of matter, a process he associated with the cosmological constant. As he stated in the paper, "In what follows, I would like to draw attention to a solution to equation (1) that can account for Hubbel's ["sic"] facts, and in which the density is constant over time" ... "If one considers a physically bounded volume, particles of matter will be continually leaving it. For the density to remain constant, new particles of matter must be continually formed in the volume from space."
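In modern notation (a hedged sketch, not Einstein's own 1917 formulation), the role of the cosmological constant Λ can be seen directly in the Friedmann equations for a scale factor a(t), density ρ and pressure p:

```latex
\[
\left(\frac{\dot a}{a}\right)^{2}
  = \frac{8\pi G}{3}\,\rho - \frac{k c^{2}}{a^{2}} + \frac{\Lambda c^{2}}{3},
\qquad
\frac{\ddot a}{a}
  = -\frac{4\pi G}{3}\left(\rho + \frac{3p}{c^{2}}\right) + \frac{\Lambda c^{2}}{3}.
\]
% Demanding a static dust universe (\dot a = \ddot a = 0, p = 0) forces
\[
\Lambda = \frac{4\pi G \rho}{c^{2}}, \qquad a = \Lambda^{-1/2}, \qquad k = +1.
\]
```

Setting ȧ = ä = 0 shows why the new term was needed: without Λ no positive matter density admits a static solution, and the resulting closed world of fixed radius is the Einstein World named in the text.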
|
Albert Einstein
|
https://en.wikipedia.org/wiki?curid=736
| 11,449 |
It thus appears that Einstein considered a steady-state model of the expanding universe many years before Hoyle, Bondi and Gold. However, Einstein's steady-state model contained a fundamental flaw, and he quickly abandoned the idea. General relativity includes a dynamical spacetime, so it is difficult to see how to identify the conserved energy and momentum. Noether's theorem allows these quantities to be determined from a Lagrangian with translation invariance, but general covariance makes translation invariance into something of a gauge symmetry. For this reason, the energy and momentum derived within general relativity by Noether's prescriptions do not make a real tensor. Einstein argued that this is true for a fundamental reason: the gravitational field could be made to vanish by a choice of coordinates. He maintained that the non-covariant energy momentum pseudotensor was, in fact, the best description of the energy momentum distribution in a gravitational field. This approach has been echoed by Lev Landau and Evgeny Lifshitz, among others, and has become standard. The use of non-covariant objects like pseudotensors was heavily criticized in 1917 by Erwin Schrödinger and others. In 1935, Einstein collaborated with Nathan Rosen to produce a model of a wormhole, often called an Einstein–Rosen bridge. His motivation was to model elementary particles with charge as a solution of gravitational field equations, in line with the program outlined in the paper "Do Gravitational Fields play an Important Role in the Constitution of the Elementary Particles?". These solutions cut and pasted Schwarzschild black holes to make a bridge between two patches. If one end of a wormhole was positively charged, the other end would be negatively charged. These properties led Einstein to believe that pairs of particles and antiparticles could be described in this way. In order to incorporate spinning point particles into general relativity, the affine connection needed to be generalized to include an antisymmetric part, called the torsion. This modification was made by Einstein and Cartan in the 1920s. The theory of general relativity has a fundamental law: the Einstein field equations, which describe how space curves. The geodesic equation, which describes how particles move, may be derived from the Einstein field equations. Since the equations of general relativity are non-linear, a lump of energy made out of pure gravitational fields, like a black hole, would move on a trajectory which is determined by the Einstein field equations themselves, not by a new law. So Einstein proposed that the path of a singular solution, like a black hole, would be determined to be a geodesic from general relativity itself.
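For reference, the two laws discussed above can be written compactly in their standard modern form (a sketch; sign conventions vary between textbooks): the field equations relating curvature to the stress–energy tensor, and the geodesic equation for a small body's path.

```latex
% The "fundamental law" named in the text, in its usual modern form:
\[
G_{\mu\nu} + \Lambda g_{\mu\nu}
  \;=\; R_{\mu\nu} - \tfrac{1}{2} R\, g_{\mu\nu} + \Lambda g_{\mu\nu}
  \;=\; \frac{8\pi G}{c^{4}}\, T_{\mu\nu},
\]
% and the geodesic equation, which the Einstein--Infeld--Hoffmann work
% showed follows from the field equations for small bodies:
\[
\frac{d^{2} x^{\mu}}{d\tau^{2}}
  + \Gamma^{\mu}_{\ \alpha\beta}\,
    \frac{dx^{\alpha}}{d\tau}\frac{dx^{\beta}}{d\tau} = 0 .
\]
```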
|
Albert Einstein
|
https://en.wikipedia.org/wiki?curid=736
| 11,458 |
This was established by Einstein, Infeld, and Hoffmann for pointlike objects without angular momentum, and by Roy Kerr for spinning objects. In a 1905 paper, Einstein postulated that light itself consists of localized particles ("quanta"). Einstein's light quanta were rejected by nearly all physicists, including Max Planck and Niels Bohr. This idea only became universally accepted in 1919, with Robert Millikan's detailed experiments on the photoelectric effect, and with the measurement of Compton scattering. Einstein concluded that each wave of frequency "f" is associated with a collection of photons with energy "hf" each, where "h" is Planck's constant. He did not say much more, because he was not sure how the particles were related to the wave, but he did suggest that this idea would explain certain experimental results, notably the photoelectric effect. In 1907, Einstein proposed a model of matter where each atom in a lattice structure is an independent harmonic oscillator. In the Einstein model, each atom oscillates independently, with a series of equally spaced quantized states for each oscillator. Einstein was aware that getting the frequency of the actual oscillations would be difficult, but he nevertheless proposed this theory because it was a particularly clear demonstration that quantum mechanics could solve the specific heat problem of classical mechanics. Peter Debye refined this model. Throughout the 1910s, quantum mechanics expanded in scope to cover many different systems. After Ernest Rutherford discovered the nucleus and proposed that electrons orbit like planets, Niels Bohr was able to show that the same quantum mechanical postulates introduced by Planck and developed by Einstein would explain the discrete motion of electrons in atoms, and the periodic table of the elements. Einstein contributed to these developments by linking them with the 1898 arguments Wilhelm Wien had made. Wien had shown that the hypothesis of adiabatic invariance of a thermal equilibrium state allows all the blackbody curves at different temperatures to be derived from one another by a simple shifting process. Einstein noted in 1911 that the same adiabatic principle shows that the quantity which is quantized in any mechanical motion must be an adiabatic invariant. Arnold Sommerfeld identified this adiabatic invariant as the action variable of classical mechanics. In 1924, Einstein received a description of a statistical model from Indian physicist Satyendra Nath Bose, based on a counting method that assumed that light could be understood as a gas of indistinguishable particles. Einstein noted that Bose's statistics applied to some atoms as well as to the proposed light particles, and submitted his translation of Bose's paper to the "Zeitschrift für Physik". Einstein also published his own articles describing the model and its implications, among them the Bose–Einstein condensate phenomenon, in which particles should collect in the lowest quantum state at very low temperatures. It was not until 1995 that the first such condensate was produced experimentally, by Eric Allin Cornell and Carl Wieman using ultra-cooling equipment built at the NIST–JILA laboratory at the University of Colorado at Boulder. Bose–Einstein statistics are now used to describe the behaviors of any assembly of bosons. Einstein's sketches for this project may be seen in the Einstein Archive in the library of Leiden University.
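The counting method described here leads to a simple closed form for the mean number of bosons in a quantum state. The Python sketch below compares it with classical counting; the units (k_B = 1) and the chosen energies and temperatures are illustrative assumptions, not tied to any experiment mentioned in the text.

```python
# Hedged sketch of the Bose-Einstein occupation number, compared with
# classical (Maxwell-Boltzmann) counting. Units with k_B = 1; values
# are illustrative only.
import math

def bose_einstein(energy, mu, T):
    """Mean occupation of a state: n = 1 / (exp((E - mu)/T) - 1)."""
    return 1.0 / (math.exp((energy - mu) / T) - 1.0)

def maxwell_boltzmann(energy, mu, T):
    """Classical counting for comparison: n = exp(-(E - mu)/T)."""
    return math.exp(-(energy - mu) / T)

mu = 0.0  # e.g. photons, whose chemical potential vanishes
e = 1.0   # energy of the state under consideration
for T in (10.0, 1.0, 0.1):
    print(f"T={T:5}: BE {bose_einstein(e, mu, T):10.4f}   "
          f"MB {maxwell_boltzmann(e, mu, T):10.4f}")
# For E >> T the two countings agree; as E approaches mu the
# Bose-Einstein occupation diverges -- the pile-up of particles into
# the lowest state that becomes a condensate at very low temperature.
```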
|
Albert Einstein
|
https://en.wikipedia.org/wiki?curid=736
| 11,465 |
Although the patent office promoted Einstein to Technical Examiner Second Class in 1906, he had not given up on academia. In 1908, he became a "Privatdozent" at the University of Bern. In "Über die Entwicklung unserer Anschauungen über das Wesen und die Konstitution der Strahlung" ("The Development of Our Views on the Composition and Essence of Radiation"), on the quantization of light, and in an earlier 1909 paper, Einstein showed that Max Planck's energy quanta must have well-defined momenta and act in some respects as independent, point-like particles. This paper introduced the "photon" concept (although the name "photon" was introduced later, by Gilbert N. Lewis in 1926) and inspired the notion of wave–particle duality in quantum mechanics. Einstein saw this wave–particle duality in radiation as concrete evidence for his conviction that physics needed a new, unified foundation. In a series of works completed from 1911 to 1913, Planck reformulated his 1900 quantum theory and introduced the idea of zero-point energy in his "second quantum theory". Soon, this idea attracted the attention of Einstein and his assistant Otto Stern. Assuming the energy of rotating diatomic molecules contains zero-point energy, they compared the theoretical specific heat of hydrogen gas with the experimental data. The numbers matched nicely. However, after publishing the findings, they promptly withdrew their support, because they no longer had confidence in the correctness of the idea of zero-point energy. In 1917, at the height of his work on relativity, Einstein published an article in "Physikalische Zeitschrift" that proposed the possibility of stimulated emission, the physical process that makes possible the maser and the laser. This article showed that the statistics of absorption and emission of light would only be consistent with Planck's distribution law if the emission of light into a mode with n photons would be enhanced statistically compared to the emission of light into an empty mode. This paper was enormously influential in the later development of quantum mechanics, because it was the first paper to show that the statistics of atomic transitions had simple laws. Einstein discovered Louis de Broglie's work and supported his ideas, which were received skeptically at first. In another major paper from this era, Einstein gave a wave equation for de Broglie waves, which Einstein suggested was the Hamilton–Jacobi equation of mechanics. This paper would inspire Schrödinger's work of 1926. Einstein played a major role in developing quantum theory, beginning with his 1905 paper on the photoelectric effect. However, he became displeased with modern quantum mechanics as it had evolved after 1925, despite its acceptance by other physicists. He was skeptical that the randomness of quantum mechanics was fundamental rather than the result of determinism, stating that God "is not playing at dice". Until the end of his life, he continued to maintain that quantum mechanics was incomplete.
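The statistical bookkeeping behind the 1917 argument can be checked in a few lines. In the sketch below, absorption out of a mode with n photons is taken proportional to n and emission into it proportional to n + 1 (spontaneous plus stimulated); with Boltzmann level populations, the Planck occupation then balances the two rates exactly. This is a hedged modern paraphrase with x = hν/(k_B T), not Einstein's original notation.

```python
# Hedged sketch of detailed balance in Einstein's 1917 stimulated-
# emission argument. Absorption rate ~ n * N_lower; emission rate
# ~ (n + 1) * N_upper, with N_upper / N_lower = exp(-x) in thermal
# equilibrium (N_lower normalised to 1 here).
import math

def planck_occupation(x):
    """n(x) = 1 / (e^x - 1), the Planck mean photon number of a mode."""
    return 1.0 / (math.exp(x) - 1.0)

for x in (0.5, 1.0, 3.0):
    n = planck_occupation(x)
    up = n * 1.0                       # absorption: proportional to n
    down = (n + 1.0) * math.exp(-x)    # emission: spontaneous + stimulated
    print(f"x={x}: absorption {up:.4f}  emission {down:.4f}")
# The two columns match for every x: without the stimulated "n" term in
# the emission rate, no occupation could balance the books, which is how
# Einstein tied transition statistics to Planck's distribution law.
```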
|
Albert Einstein
|
https://en.wikipedia.org/wiki?curid=736
| 11,471 |
The Bohr–Einstein debates were a series of public disputes about quantum mechanics between Einstein and Niels Bohr, two of its founders. The debates are remembered for their importance to the philosophy of science and influenced later interpretations of quantum mechanics. In 1935, Einstein returned to quantum mechanics, in particular to the question of its completeness, in a collaboration with Boris Podolsky and Nathan Rosen that laid out what would become known as the EPR paradox. In a thought experiment, they considered two particles which had interacted such that their properties were strongly correlated. No matter how far the two particles were separated, a precise position measurement on one particle would result in equally precise knowledge of the position of the other particle; likewise, a precise momentum measurement of one particle would result in equally precise knowledge of the momentum of the other particle, without needing to disturb the other particle in any way. Given Einstein's concept of local realism, there were two possibilities: (1) either the other particle had these properties already determined, or (2) the process of measuring the first particle instantaneously affected the reality of the position and momentum of the second particle. Einstein rejected this second possibility (popularly called "spooky action at a distance"). Einstein's belief in local realism led him to assert that, while the correctness of quantum mechanics was not in question, it must be incomplete. But as a physical principle, local realism was shown to be incorrect when the Aspect experiment of 1982 confirmed Bell's theorem, which J. S. Bell had delineated in 1964. The results of these and subsequent experiments demonstrate that quantum physics cannot be represented by any version of the picture of physics in which "particles are regarded as unconnected independent classical-like entities, each one being unable to communicate with the other after they have separated." Although Einstein was wrong about local realism, his clear prediction of the unusual properties of its opposite, entangled quantum states, has resulted in the EPR paper becoming among the most influential papers published in "Physical Review". It is considered a centerpiece of the development of quantum information theory. Following his research on general relativity, Einstein attempted to generalize his theory of gravitation to include electromagnetism as aspects of a single entity. In 1950, he described his "unified field theory" in a "Scientific American" article titled "On the Generalized Theory of Gravitation". Although he was lauded for this work, his efforts were ultimately unsuccessful. Notably, Einstein's unification project did not accommodate the strong and weak nuclear forces, neither of which was well understood until many years after his death. Although mainstream physics long ignored Einstein's approaches to unification, Einstein's work has motivated modern quests for a theory of everything, in particular string theory, where geometrical fields emerge in a unified quantum-mechanical setting.
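The Bell test that settled the question can be illustrated numerically. For a pair of spins in the singlet state, quantum mechanics predicts the correlation E(a, b) = −cos(a − b) between measurements along directions a and b; local realism caps the CHSH combination of four such correlations at 2, while quantum mechanics reaches 2√2. The Python sketch below evaluates the standard optimal angles; these settings are the textbook choice, not those of any particular experiment.

```python
# Hedged numerical sketch of the CHSH form of Bell's theorem.
import math

def E(a, b):
    """Quantum correlation for a singlet state, measurement angles a, b."""
    return -math.cos(a - b)

a, a2 = 0.0, math.pi / 2           # Alice's two settings
b, b2 = math.pi / 4, 3 * math.pi / 4  # Bob's two settings

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(f"CHSH S = {abs(S):.4f}")    # 2.8284, i.e. 2*sqrt(2)
print(f"local-realist bound: 2; quantum (Tsirelson) bound: {2*math.sqrt(2):.4f}")
# |S| > 2 is exactly what the Aspect experiment and its successors
# observed, ruling out the local-realist picture Einstein favoured.
```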
|
Albert Einstein
|
https://en.wikipedia.org/wiki?curid=736
| 11,478 |
Einstein conducted other investigations that were unsuccessful and abandoned. These pertain to force, superconductivity, and other research. In addition to longtime collaborators Leopold Infeld, Nathan Rosen, Peter Bergmann and others, Einstein also had some one-shot collaborations with various scientists. Einstein and de Haas demonstrated that magnetization is due to the motion of electrons, nowadays known to be the spin. In order to show this, they reversed the magnetization in an iron bar suspended on a torsion pendulum. They confirmed that this leads the bar to rotate, because the electron's angular momentum changes as the magnetization changes. This experiment needed to be sensitive, because the angular momentum associated with electrons is small, but it definitively established that electron motion of some kind is responsible for magnetization. Einstein suggested to Erwin Schrödinger that he might be able to reproduce the statistics of a Bose–Einstein gas by considering a box and then associating with each possible quantum motion of a particle in the box an independent harmonic oscillator. Quantizing these oscillators, each level will have an integer occupation number, which will be the number of particles in it. This formulation is a form of second quantization, but it predates modern quantum mechanics. Erwin Schrödinger applied this to derive the thermodynamic properties of a semiclassical ideal gas. Schrödinger urged Einstein to add his name as co-author, although Einstein declined the invitation. In 1926, Einstein and his former student Leó Szilárd co-invented (and in 1930, patented) the Einstein refrigerator. This absorption refrigerator was then revolutionary for having no moving parts and using only heat as an input. On 11 November 1930, U.S. Patent 1,781,541 was awarded to Einstein and Leó Szilárd for the refrigerator. Their invention was not immediately put into commercial production, and the most promising of their patents were acquired by the Swedish company Electrolux. While traveling, Einstein wrote daily to his wife Elsa and adopted stepdaughters Margot and Ilse. The letters were included in the papers bequeathed to the Hebrew University of Jerusalem. Margot Einstein permitted the personal letters to be made available to the public, but requested that it not be done until twenty years after her death (she died in 1986). Barbara Wolff, of the Hebrew University's Albert Einstein Archives, told the BBC that there are about 3,500 pages of private correspondence written between 1912 and 1955. Einstein's right of publicity was litigated in 2015 in a federal district court in California. Although the court initially held that the right had expired, that ruling was immediately appealed, and the decision was later vacated in its entirety. The underlying claims between the parties in that lawsuit were ultimately settled. The right is enforceable, and the Hebrew University of Jerusalem is the exclusive representative of that right. Corbis, successor to The Roger Richman Agency, licenses the use of his name and associated imagery, as agent for the university.
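The smallness the passage alludes to can be estimated from the gyromagnetic relation: flipping the bar's magnetization changes the electrons' angular momentum by ΔL = (2mₑ/(g·e))·Δμ, which the suspended bar must absorb. The Python sketch below is an order-of-magnitude estimate only; the rod dimensions and g = 2 are illustrative assumptions, not the parameters of the 1915 apparatus.

```python
# Hedged order-of-magnitude sketch of the Einstein-de Haas effect.
import math

m_e = 9.109e-31   # electron mass, kg
e = 1.602e-19     # elementary charge, C
g = 2.0           # electron g-factor (spin-dominated magnetization, an assumption)
M_s = 1.7e6       # saturation magnetization of iron, A/m

radius, length = 1e-3, 3e-2          # a thin illustrative rod, m
V = math.pi * radius**2 * length     # rod volume, m^3
delta_mu = 2 * M_s * V               # full reversal: total moment mu -> -mu

delta_L = (2 * m_e / (g * e)) * delta_mu
print(f"angular momentum kick: {delta_L:.2e} kg m^2/s")  # ~2e-12
# A kick of order 1e-12 kg m^2/s is why the text notes the experiment
# "needed to be sensitive", and why a torsion pendulum was used.
```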
|
Albert Einstein
|
https://en.wikipedia.org/wiki?curid=736
| 11,486 |
Mount Einstein in New Zealand's Paparoa Range was named after him in 1970 by the Department of Scientific and Industrial Research. Einstein became one of the most famous scientific celebrities, beginning with the confirmation of his theory of general relativity in 1919. Despite the general public having little understanding of his work, he was widely recognized and received adulation and publicity. In the period before World War II, "The New Yorker" published a vignette in their "The Talk of the Town" feature saying that Einstein was so well known in America that he would be stopped on the street by people wanting him to explain "that theory". He finally figured out a way to handle the incessant inquiries. He told his inquirers, "Pardon me, sorry! Always I am mistaken for Professor Einstein." Einstein has been the subject of or inspiration for many novels, films, plays, and works of music. He is a favorite model for depictions of absent-minded professors; his expressive face and distinctive hairstyle have been widely copied and exaggerated. "Time" magazine's Frederic Golden wrote that Einstein was "a cartoonist's dream come true". Einstein received numerous awards and honors, and in 1922, he was awarded the 1921 Nobel Prize in Physics "for his services to Theoretical Physics, and especially for his discovery of the law of the photoelectric effect". None of the nominations in 1921 met the criteria set by Alfred Nobel, so the 1921 prize was carried forward and awarded to Einstein in 1922.
|
Periodic table
|
https://en.wikipedia.org/wiki?curid=23053
| 11,924 |
Another important property of elements is their electronegativity. Atoms can form covalent bonds to each other by sharing electrons in pairs, creating an overlap of valence orbitals. The degree to which each atom attracts the shared electron pair depends on the atom's electronegativity – the tendency of an atom to attract shared electrons. The more electronegative atom will tend to attract the electron pair more, and the less electronegative (or more electropositive) one will attract it less. In extreme cases, the electron can be thought of as having been passed completely from the more electropositive atom to the more electronegative one, though this is a simplification. The bond then binds two ions, one positive (having given up the electron) and one negative (having accepted it), and is termed an ionic bond. The first to systematically expand and correct the chemical potentials of Bohr's atomic theory was Walther Kossel, in 1914 and again in 1916. Kossel explained that in the periodic table new elements would be created as electrons were added to the outer shell. In Kossel's paper, he writes: "This leads to the conclusion that the electrons, which are added further, should be put into concentric rings or shells, on each of which ... only a certain number of electrons—namely, eight in our case—should be arranged. As soon as one ring or shell is completed, a new one has to be started for the next element; the number of electrons, which are most easily accessible, and lie at the outermost periphery, increases again from element to element and, therefore, in the formation of each new shell the chemical periodicity is repeated." A significant controversy arose with elements 102 through 106 in the 1960s and 1970s, as competition arose between the LBNL team (now led by Albert Ghiorso) and a team of Soviet scientists at the Joint Institute for Nuclear Research (JINR) led by Georgy Flyorov. Each team claimed discovery, and in some cases each proposed its own name for the element, creating an element naming controversy that lasted decades. These elements were made by bombardment of actinides with light ions. IUPAC at first adopted a hands-off approach, preferring to wait and see if a consensus would be forthcoming. Unfortunately, it was also the height of the Cold War, and it became clear after some time that this would not happen. As such, IUPAC and the International Union of Pure and Applied Physics (IUPAP) created a Transfermium Working Group (TWG, fermium being element 100) in 1985 to set out criteria for discovery, which were published in 1991. After some further controversy, these elements received their final names in 1997, including seaborgium (106) in honour of Seaborg.
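The electronegativity picture above lends itself to a simple worked example. The Python sketch below applies the common textbook rule of thumb (roughly: ionic above a difference of ~1.8 on the Pauling scale, polar covalent above ~0.4); the thresholds are the usual simplification the passage itself warns about, and the electronegativity values are standard Pauling figures.

```python
# Hedged sketch: classifying bond character by electronegativity
# difference. Standard Pauling values; thresholds are a rule of thumb.
PAULING = {"H": 2.20, "C": 2.55, "N": 3.04, "O": 3.44,
           "F": 3.98, "Na": 0.93, "Cl": 3.16, "Cs": 0.79}

def bond_character(a, b):
    diff = abs(PAULING[a] - PAULING[b])
    if diff > 1.8:
        kind = "ionic"
    elif diff > 0.4:
        kind = "polar covalent"
    else:
        kind = "nonpolar covalent"
    return diff, kind

for pair in [("Na", "Cl"), ("H", "O"), ("C", "H"), ("Cs", "F")]:
    diff, kind = bond_character(*pair)
    print(f"{pair[0]}-{pair[1]}: dEN = {diff:.2f} -> {kind}")
# Na-Cl (2.23) comes out ionic, H-O (1.24) polar covalent, and
# C-H (0.35) essentially nonpolar, matching chemical intuition.
```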
|
Periodic table
|
https://en.wikipedia.org/wiki?curid=23053
| 11,971 |
Even if eighth-row elements can exist, producing them is likely to be difficult, and it should become even more difficult as atomic number rises. Although the 8s elements are expected to be reachable with present means, the first few 5g elements are expected to require new technology, if they can be produced at all. Experimentally characterising these elements chemically would also pose a great challenge.
|
Artificial intelligence
|
https://en.wikipedia.org/wiki?curid=1164
| 18,320 |
Artificial intelligence (AI) is intelligence—perceiving, synthesizing, and inferring information—demonstrated by machines, as opposed to intelligence displayed by animals and humans. Example tasks include speech recognition, computer vision, translation between natural languages, and other mappings of inputs to outputs. The "Oxford English Dictionary" of Oxford University Press defines artificial intelligence as: the theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages. AI applications include advanced web search engines (e.g., Google), recommendation systems (used by YouTube, Amazon and Netflix), understanding human speech (such as Siri and Alexa), self-driving cars (e.g., Tesla), automated decision-making and competing at the highest level in strategic game systems (such as chess and Go). As machines become increasingly capable, tasks considered to require "intelligence" are often removed from the definition of AI, a phenomenon known as the AI effect. For instance, optical character recognition is frequently excluded from things considered to be AI, having become a routine technology. Artificial intelligence was founded as an academic discipline in 1956, and in the years since has experienced several waves of optimism, followed by disappointment and the loss of funding (known as an "AI winter"), followed by new approaches, success and renewed funding. AI research has tried and discarded many different approaches since its founding, including simulating the brain, modeling human problem solving, formal logic, large databases of knowledge and imitating animal behavior. In the first decades of the 21st century, highly mathematical-statistical machine learning has dominated the field, and this technique has proved highly successful, helping to solve many challenging problems throughout industry and academia. The various sub-fields of AI research are centered around particular goals and the use of particular tools. The traditional goals of AI research include reasoning, knowledge representation, planning, learning, natural language processing, perception, and the ability to move and manipulate objects. General intelligence (the ability to solve an arbitrary problem) is among the field's long-term goals. To solve these problems, AI researchers have adapted and integrated a wide range of problem-solving techniques – including search and mathematical optimization, formal logic, artificial neural networks, and methods based on statistics, probability and economics. AI also draws upon computer science, psychology, linguistics, philosophy, and many other fields. The field was founded on the assumption that human intelligence "can be so precisely described that a machine can be made to simulate it".
|
Artificial intelligence
|
https://en.wikipedia.org/wiki?curid=1164
| 18,327 |
This raised philosophical arguments about the mind and the ethical consequences of creating artificial beings endowed with human-like intelligence; these issues have previously been explored by myth, fiction and philosophy since antiquity. Computer scientists and philosophers have since suggested that AI may become an existential risk to humanity if its rational capacities are not steered towards beneficial goals. Artificial beings with intelligence have also been common in fiction, as in Mary Shelley's "Frankenstein" or Karel Čapek's "R.U.R." These characters and their fates raised many of the same issues now discussed in the ethics of artificial intelligence. The study of mechanical or "formal" reasoning began with philosophers and mathematicians in antiquity. The study of mathematical logic led directly to Alan Turing's theory of computation, which suggested that a machine, by shuffling symbols as simple as "0" and "1", could simulate any conceivable act of mathematical deduction. This insight that digital computers can simulate any process of formal reasoning is known as the Church–Turing thesis. This, along with concurrent discoveries in neurobiology, information theory and cybernetics, led researchers to consider the possibility of building an electronic brain. The first work that is now generally recognized as AI was McCulloch and Pitts' 1943 formal design for Turing-complete "artificial neurons". By the 1950s, two visions for how to achieve machine intelligence emerged. One vision, known as Symbolic AI or GOFAI, was to use computers to create a symbolic representation of the world and systems that could reason about the world. Proponents included Allen Newell, Herbert A. Simon, and Marvin Minsky. Closely associated with this approach was the "heuristic search" approach, which likened intelligence to a problem of exploring a space of possibilities for answers. The second vision, known as the connectionist approach, sought to achieve intelligence through learning. Proponents of this approach, most prominently Frank Rosenblatt, sought to build networks of perceptrons inspired by the connections between neurons. James Manyika and others have compared the two approaches to the mind (Symbolic AI) and the brain (connectionist). Manyika argues that symbolic approaches dominated the push for artificial intelligence in this period, due in part to their connection to the intellectual traditions of Descartes, Boole, Gottlob Frege, Bertrand Russell, and others. Connectionist approaches based on cybernetics or artificial neural networks were pushed to the background but have gained new prominence in recent decades. By the early 1960s, computers were learning checkers strategies, solving word problems in algebra, proving logical theorems and speaking English.
|
Artificial intelligence
|
https://en.wikipedia.org/wiki?curid=1164
| 18,333 |
Researchers in the 1960s and the 1970s were convinced that symbolic approaches would eventually succeed in creating a machine with artificial general intelligence and considered this the goal of their field. Herbert Simon predicted, "machines will be capable, within twenty years, of doing any work a man can do". Marvin Minsky agreed, writing, "within a generation ... the problem of creating 'artificial intelligence' will substantially be solved". They had failed to recognize the difficulty of some of the remaining tasks. Progress slowed and in 1974, in response to the criticism of Sir James Lighthill and ongoing pressure from the US Congress to fund more productive projects, both the U.S. and British governments cut off exploratory research in AI. The next few years would later be called an "AI winter", a period when obtaining funding for AI projects was difficult. In the early 1980s, AI research was revived by the commercial success of expert systems, a form of AI program that simulated the knowledge and analytical skills of human experts. By 1985, the market for AI had reached over a billion dollars. At the same time, Japan's fifth generation computer project inspired the U.S. and British governments to restore funding for academic research. However, beginning with the collapse of the Lisp Machine market in 1987, AI once again fell into disrepute, and a second, longer-lasting winter began. Many researchers began to doubt that the symbolic approach would be able to imitate all the processes of human cognition, especially perception, robotics, learning and pattern recognition. A number of researchers began to look into "sub-symbolic" approaches to specific AI problems. Robotics researchers, such as Rodney Brooks, rejected symbolic AI and focused on the basic engineering problems that would allow robots to move, survive, and learn their environment. Interest in neural networks and "connectionism" was revived by Geoffrey Hinton, David Rumelhart and others in the middle of the 1980s. Soft computing tools were developed in the 1980s, such as neural networks, fuzzy systems, Grey system theory, evolutionary computation and many tools drawn from statistics or mathematical optimization. AI gradually restored its reputation in the late 1990s and early 21st century by finding specific solutions to specific problems. The narrow focus allowed researchers to produce verifiable results, exploit more mathematical methods, and collaborate with other fields (such as statistics, economics and mathematics). By 2000, solutions developed by AI researchers were being widely used, although in the 1990s they were rarely described as "artificial intelligence". Faster computers, algorithmic improvements, and access to large amounts of data enabled advances in machine learning and perception; data-hungry deep learning methods started to dominate accuracy benchmarks around 2012.
|
Artificial intelligence
|
https://en.wikipedia.org/wiki?curid=1164
| 18,345 |
According to Bloomberg's Jack Clark, 2015 was a landmark year for artificial intelligence, with the number of software projects that use AI within Google increasing from "sporadic usage" in 2012 to more than 2,700 projects. He attributes this to an increase in affordable neural networks, due to a rise in cloud computing infrastructure and to an increase in research tools and datasets. In a 2017 survey, one in five companies reported they had "incorporated AI in some offerings or processes". The amount of research into AI (measured by total publications) increased by 50% in the years 2015–2019. Numerous academic researchers became concerned that AI was no longer pursuing the original goal of creating versatile, fully intelligent machines. Much of current research involves statistical AI, including highly successful techniques such as deep learning, which is overwhelmingly used to solve specific problems. This concern has led to the subfield of artificial general intelligence (or "AGI"), which had several well-funded institutions by the 2010s. The general problem of simulating (or creating) intelligence has been broken down into sub-problems. These consist of particular traits or capabilities that researchers expect an intelligent system to display. The traits described below have received the most attention. Early researchers developed algorithms that imitated the step-by-step reasoning that humans use when they solve puzzles or make logical deductions. By the late 1980s and 1990s, AI research had developed methods for dealing with uncertain or incomplete information, employing concepts from probability and economics. Many of these algorithms proved to be insufficient for solving large reasoning problems because they experienced a "combinatorial explosion": they became exponentially slower as the problems grew larger. Even humans rarely use the step-by-step deduction that early AI research could model; they solve most of their problems using fast, intuitive judgments. A representation of "what exists" is an ontology: the set of objects, relations, concepts, and properties formally described so that software agents can interpret them. The most general ontologies are called upper ontologies, which attempt to provide a foundation for all other knowledge and act as mediators between domain ontologies that cover specific knowledge about a particular knowledge domain (field of interest or area of concern). A truly intelligent program would also need access to commonsense knowledge: the set of facts that an average person knows. The semantics of an ontology is typically represented in a description logic, such as the Web Ontology Language. AI research has developed tools to represent specific domains, such as objects, properties, categories and relations between objects.
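As a toy illustration of the kind of knowledge representation described above, the sketch below encodes facts as (subject, relation, object) triples and infers category membership by following "is-a" links transitively; all names and relations are invented for the example rather than taken from any real upper ontology.

```python
# Toy knowledge base of (subject, relation, object) triples (all names invented).
TRIPLES = {
    ("dog", "is-a", "mammal"),
    ("mammal", "is-a", "animal"),
    ("animal", "is-a", "physical-object"),
    ("dog", "has-property", "four-legged"),
}

def is_a(subject: str, category: str) -> bool:
    """Transitively follow 'is-a' links, as a domain ontology might."""
    seen, frontier = set(), {subject}
    while frontier:
        current = frontier.pop()
        if current == category:
            return True
        seen.add(current)
        frontier |= {o for (s, r, o) in TRIPLES
                     if s == current and r == "is-a" and o not in seen}
    return False

print(is_a("dog", "animal"))           # True, via mammal
print(is_a("dog", "physical-object"))  # True, up to the most general category
```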
|
Artificial intelligence
|
https://en.wikipedia.org/wiki?curid=1164
| 18,355 |
Other difficult domains include default reasoning (things that humans assume are true until they are told differently and will remain true even when other facts are changing), as well as other domains. Among the most difficult problems in AI are: the breadth of commonsense knowledge (the number of atomic facts that the average person knows is enormous); and the sub-symbolic form of most commonsense knowledge (much of what people know is not represented as "facts" or "statements" that they could express verbally). Unsupervised learning finds patterns in a stream of input. Supervised learning requires a human to label the input data first, and comes in two main varieties: classification and numerical regression. Classification is used to determine what category something belongs in – the program sees a number of examples of things from several categories and will learn to classify new inputs. Regression is the attempt to produce a function that describes the relationship between inputs and outputs and predicts how the outputs should change as the inputs change. Both classifiers and regression learners can be viewed as "function approximators" trying to learn an unknown (possibly implicit) function; for example, a spam classifier can be viewed as learning a function that maps from the text of an email to one of two categories, "spam" or "not spam". In reinforcement learning the agent is rewarded for good responses and punished for bad ones. The agent classifies its responses to form a strategy for operating in its problem space. Computational learning theory can assess learners by computational complexity, by sample complexity (how much data is required), or by other notions of optimization. Natural language processing (NLP) allows machines to read and understand human language. A sufficiently powerful natural language processing system would enable natural-language user interfaces and the acquisition of knowledge directly from human-written sources, such as newswire texts. Some straightforward applications of NLP include information retrieval, question answering and machine translation. Symbolic AI used formal syntax to translate the deep structure of sentences into logic. This failed to produce useful applications, due to the intractability of logic and the breadth of commonsense knowledge. Modern statistical techniques include co-occurrence frequencies (how often one word appears near another), keyword spotting (searching for a particular word to retrieve information), transformer-based deep learning (which finds patterns in text), and others. They have achieved acceptable accuracy at the page or paragraph level, and, by 2019, could generate coherent text. Machine perception is the ability to use input from sensors (such as cameras, microphones, wireless signals, and active lidar, sonar, radar, and tactile sensors) to deduce aspects of the world. Applications include speech recognition, facial recognition, and object recognition.
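A minimal sketch of supervised learning as function approximation, fitting a one-variable linear regressor by gradient descent on invented data (the numbers and learning rate are illustrative only):

```python
# Fit y ≈ w*x + b by gradient descent on a toy dataset (invented numbers).
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 4.0, 6.2, 7.9]  # roughly y = 2x, with noise

w, b, lr = 0.0, 0.0, 0.01
for _ in range(5000):
    # Gradients of the mean squared error with respect to w and b.
    grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
    grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad_w
    b -= lr * grad_b

print(f"learned w={w:.2f}, b={b:.2f}")        # close to w=2, b=0
print(f"prediction for x=5: {w * 5 + b:.2f}")  # the learned function generalizes
```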
|
Artificial intelligence
|
https://en.wikipedia.org/wiki?curid=1164
| 18,364 |
Affective computing is an interdisciplinary umbrella that comprises systems that recognize, interpret, process or simulate human feeling, emotion and mood. For example, some virtual assistants are programmed to speak conversationally or even to banter humorously; this makes them appear more sensitive to the emotional dynamics of human interaction, or otherwise facilitates human–computer interaction. However, this tends to give naïve users an unrealistic conception of how intelligent existing computer agents actually are. Moderate successes related to affective computing include textual sentiment analysis and, more recently, multimodal sentiment analysis, wherein AI classifies the affects displayed by a videotaped subject. A machine with general intelligence can solve a wide variety of problems with breadth and versatility similar to human intelligence. There are several competing ideas about how to develop artificial general intelligence. Hans Moravec and Marvin Minsky argue that work in different individual domains can be incorporated into an advanced multi-agent system or cognitive architecture with general intelligence. Pedro Domingos hopes that there is a conceptually straightforward, but mathematically difficult, "master algorithm" that could lead to AGI. AI can solve many problems by intelligently searching through many possible solutions. Reasoning can be reduced to performing a search. For example, logical proof can be viewed as searching for a path that leads from premises to conclusions, where each step is the application of an inference rule. Planning algorithms search through trees of goals and subgoals, attempting to find a path to a target goal, a process called means-ends analysis. Robotics algorithms for moving limbs and grasping objects use local searches in configuration space. Simple exhaustive searches are rarely sufficient for most real-world problems: the search space (the number of places to search) quickly grows to astronomical numbers. The result is a search that is too slow or never completes. The solution, for many problems, is to use "heuristics" or "rules of thumb" that prioritize choices in favor of those more likely to reach a goal and to do so in a smaller number of steps. In some search methodologies, heuristics can also serve to eliminate some choices unlikely to lead to a goal (called "pruning the search tree"). Heuristics supply the program with a "best guess" for the path on which the solution lies. A very different kind of search came to prominence in the 1990s, based on the mathematical theory of optimization. For many problems, it is possible to begin the search with some form of a guess and then refine the guess incrementally until no more refinements can be made. These algorithms can be visualized as blind hill climbing: we begin the search at a random point on the landscape, and then, by jumps or steps, we keep moving our guess uphill until we reach the top. Other related optimization algorithms include random optimization, beam search and metaheuristics like simulated annealing. Evolutionary computation uses a form of optimization search. For example, evolutionary algorithms may begin with a population of organisms (the guesses) and then allow them to mutate and recombine, selecting only the fittest to survive each generation (refining the guesses). Classic evolutionary algorithms include genetic algorithms, gene expression programming, and genetic programming. Alternatively, distributed search processes can coordinate via swarm intelligence algorithms.
Two popular swarm algorithms used in search are particle swarm optimization (inspired by bird flocking) and ant colony optimization (inspired by ant trails).
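The blind hill climbing described above can be sketched in a few lines; the one-dimensional objective and step size below are invented for illustration:

```python
import random

def objective(x: float) -> float:
    # An invented "landscape" with a single peak at x = 3.
    return -(x - 3.0) ** 2

# Start at a random point, then keep moving the guess uphill.
x = random.uniform(-10.0, 10.0)
step = 0.1
while True:
    neighbors = [x - step, x + step]
    best = max(neighbors, key=objective)
    if objective(best) <= objective(x):
        break  # no uphill neighbor: a (possibly local) optimum
    x = best

print(f"hill climbing settled at x = {x:.2f}")  # near 3.0
```

Because the climber stops at the first point with no uphill neighbor, it can get stuck on a local peak, which is exactly the weakness that metaheuristics such as simulated annealing are designed to mitigate.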
|
Artificial intelligence
|
https://en.wikipedia.org/wiki?curid=1164
| 18,372 |
Logic is used for knowledge representation and problem-solving, but it can be applied to other problems as well. For example, the satplan algorithm uses logic for planning. Several different forms of logic are used in AI research. Propositional logic involves truth functions such as "or" and "not". First-order logic adds quantifiers and predicates and can express facts about objects, their properties, and their relations with each other. Fuzzy logic assigns a "degree of truth" (between 0 and 1) to vague statements such as "Alice is old" (or rich, or tall, or hungry) that are too linguistically imprecise to be completely true or false. Default logics, non-monotonic logics and circumscription are forms of logic designed to help with default reasoning and the qualification problem. Several extensions of logic have been designed to handle specific domains of knowledge, such as description logics. Logics to model contradictory or inconsistent statements arising in multi-agent systems have also been designed, such as paraconsistent logics. Many problems in AI (including in reasoning, planning, learning, perception, and robotics) require the agent to operate with incomplete or uncertain information. AI researchers have devised a number of tools to solve these problems using methods from probability theory and economics. Bayesian networks are a very general tool that can be used for various problems, including reasoning (using the Bayesian inference algorithm). Probabilistic algorithms can also be used for filtering, prediction, smoothing and finding explanations for streams of data, helping perception systems to analyze processes that occur over time (e.g., hidden Markov models or Kalman filters). A key concept from the science of economics is "utility", a measure of how valuable something is to an intelligent agent. Precise mathematical tools have been developed that analyze how an agent can make choices and plan, using decision theory, decision analysis, and information value theory. These tools include models such as Markov decision processes, dynamic decision networks, game theory and mechanism design. The simplest AI applications can be divided into two types: classifiers ("if shiny then diamond") and controllers ("if diamond then pick up"). Controllers do, however, also classify conditions before inferring actions, and therefore classification forms a central part of many AI systems. Classifiers are functions that use pattern matching to determine the closest match. They can be tuned according to examples, making them very attractive for use in AI. These examples are known as observations or patterns. In supervised learning, each pattern belongs to a certain predefined class. A class is a decision that has to be made. All the observations combined with their class labels are known as a data set. When a new observation is received, that observation is classified based on previous experience.
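A minimal sketch of probabilistic reasoning with Bayes' rule, using invented numbers for a toy diagnostic problem (the prevalence and test accuracies are made up for illustration):

```python
# Toy diagnostic inference with Bayes' rule (all probabilities invented).
p_disease = 0.01            # prior: 1% prevalence
p_pos_given_disease = 0.95  # test sensitivity
p_pos_given_healthy = 0.05  # false positive rate

# P(positive) by the law of total probability.
p_pos = (p_pos_given_disease * p_disease
         + p_pos_given_healthy * (1 - p_disease))

# Posterior: P(disease | positive) = P(positive | disease) * P(disease) / P(positive)
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos

print(f"P(disease | positive test) = {p_disease_given_pos:.3f}")  # ~0.161
```

Even with an accurate test, the low prior drags the posterior down to about 16%, the kind of counterintuitive result that makes explicit probabilistic tools valuable for agents reasoning with uncertain information.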
|
Artificial intelligence
|
https://en.wikipedia.org/wiki?curid=1164
| 18,384 |
A classifier can be trained in various ways; there are many statistical and machine learning approaches. The naive Bayes classifier is reportedly the "most widely used learner" at Google, due in part to its scalability. Classifier performance depends greatly on the characteristics of the data to be classified, such as the dataset size, the distribution of samples across classes, the dimensionality, and the level of noise. Model-based classifiers perform well if the assumed model is an extremely good fit for the actual data. Otherwise, if no matching model is available, and if accuracy (rather than speed or scalability) is the sole concern, conventional wisdom is that discriminative classifiers (especially SVMs) tend to be more accurate than model-based classifiers such as "naive Bayes" on most practical data sets. Artificial neural networks were inspired by the architecture of neurons in the human brain. A simple "neuron" "N" accepts input from other neurons, each of which, when activated (or "fired"), casts a weighted "vote" for or against whether neuron "N" should itself activate. Learning requires an algorithm to adjust these weights based on the training data; one simple algorithm (dubbed "fire together, wire together") is to increase the weight between two connected neurons when the activation of one triggers the successful activation of another. Neurons have a continuous spectrum of activation; in addition, neurons can process inputs in a nonlinear way rather than weighing straightforward votes. Modern neural networks model complex relationships between inputs and outputs and find patterns in data. They can learn continuous functions and even digital logical operations. Neural networks can be viewed as a type of mathematical optimization – they perform gradient descent on a multi-dimensional topology that was created by training the network. The most common training technique is the backpropagation algorithm. Other learning techniques for neural networks are Hebbian learning ("fire together, wire together"), GMDH or competitive learning. The main categories of networks are acyclic or feedforward neural networks (where the signal passes in only one direction) and recurrent neural networks (which allow feedback and short-term memories of previous input events). Among the most popular feedforward networks are perceptrons, multi-layer perceptrons and radial basis networks. Deep learning uses several layers of neurons between the network's inputs and outputs. The multiple layers can progressively extract higher-level features from the raw input. For example, in image processing, lower layers may identify edges, while higher layers may identify concepts relevant to a human, such as digits, letters or faces. Deep learning has drastically improved the performance of programs in many important subfields of artificial intelligence, including computer vision, speech recognition, image classification and others.
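The weighted-vote neuron and weight-adjustment idea can be sketched with a single perceptron learning the logical AND function; the training data and learning rate below are illustrative, and the classic perceptron update stands in for the more general gradient-based methods mentioned above:

```python
# A single "neuron": weighted vote over inputs, thresholded to 0 or 1.
def fire(weights, bias, inputs):
    return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0

# Train on logical AND with the perceptron rule: nudge the weights
# toward the correct output whenever the neuron's vote is wrong.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights, bias, lr = [0.0, 0.0], 0.0, 0.1

for _ in range(20):  # a few passes over the data suffice here
    for inputs, target in data:
        error = target - fire(weights, bias, inputs)
        weights = [w + lr * error * x for w, x in zip(weights, inputs)]
        bias += lr * error

print([fire(weights, bias, i) for i, _ in data])  # [0, 0, 0, 1]
```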
|
Artificial intelligence
|
https://en.wikipedia.org/wiki?curid=1164
| 18,392 |
Deep learning often uses convolutional neural networks for many or all of its layers. In a convolutional layer, each neuron receives input from only a restricted area of the previous layer called the neuron's receptive field. This can substantially reduce the number of weighted connections between neurons, and creates a hierarchy similar to the organization of the animal visual cortex. In recurrent networks trained by gradient descent, however, the long-term gradients which are back-propagated can "vanish" (that is, they can tend to zero) or "explode" (that is, they can tend to infinity), a difficulty known as the vanishing gradient problem. Specialized languages and frameworks for artificial intelligence have been developed, such as Lisp, Prolog, TensorFlow and many others. Hardware developed for AI includes AI accelerators and neuromorphic computing. Frequently, when a technique reaches mainstream use, it is no longer considered artificial intelligence; this phenomenon is described as the AI effect. In the 2010s, AI applications were at the heart of the most commercially successful areas of computing, and have become a ubiquitous feature of daily life. AI is used in search engines (such as Google Search), virtual assistants (such as Siri or Alexa), and autonomous vehicles (including drones and self-driving cars). There are also thousands of successful AI applications used to solve problems for specific industries or institutions. A few examples are energy storage, deepfakes, medical diagnosis, military logistics, and supply chain management. Game playing has been a test of AI's strength since the 1950s. Deep Blue became the first computer chess-playing system to beat a reigning world chess champion, Garry Kasparov, on 11 May 1997. In 2011, in a "Jeopardy!" quiz show exhibition match, IBM's question answering system, Watson, defeated the two greatest "Jeopardy!" champions, Brad Rutter and Ken Jennings, by a significant margin. In March 2016, AlphaGo won 4 out of 5 games of Go in a match with Go champion Lee Sedol, becoming the first computer Go-playing system to beat a professional Go player without handicaps. Other programs, such as Pluribus and Cepheus, handle imperfect-information games like poker at a superhuman level. In the 2010s, DeepMind developed a "generalized artificial intelligence" that could learn many diverse Atari games on its own. By 2020, natural language processing systems such as the enormous GPT-3 (then by far the largest artificial neural network) were matching human performance on pre-existing benchmarks, albeit without the systems attaining a commonsense understanding of the contents of the benchmarks.
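A minimal sketch of the receptive-field idea in a convolutional layer: each output value is computed from only a small patch of the input. The toy image and vertical-edge kernel below are invented for illustration:

```python
import numpy as np

# Toy 6x6 "image": a dark left half and a bright right half (invented data).
image = np.array([[0, 0, 0, 9, 9, 9]] * 6, dtype=float)

# A simple vertical-edge kernel; each output pixel sees only a 3x3
# receptive field of the input, as in a convolutional layer.
kernel = np.array([[-1, 0, 1],
                   [-1, 0, 1],
                   [-1, 0, 1]], dtype=float)

h, w = image.shape
kh, kw = kernel.shape
out = np.zeros((h - kh + 1, w - kw + 1))
for i in range(out.shape[0]):
    for j in range(out.shape[1]):
        patch = image[i:i + kh, j:j + kw]   # the receptive field
        out[i, j] = np.sum(patch * kernel)

print(out)  # large values exactly where the vertical edge lies
```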
|
Artificial intelligence
|
https://en.wikipedia.org/wiki?curid=1164
| 18,402 |
DeepMind's AlphaFold 2 (2020) demonstrated the ability to approximate, in hours rather than months, the 3D structure of a protein. Other applications predict the result of judicial decisions, create art (such as poetry or painting) and prove mathematical theorems. In 2019, WIPO reported that AI was the most prolific emerging technology in terms of the number of patent applications and granted patents; the Internet of things, however, was estimated to be the largest in terms of market size. It was followed, again in market size, by big data technologies, robotics, AI, 3D printing and the fifth generation of mobile services (5G). Since AI emerged in the 1950s, 340,000 AI-related patent applications have been filed by innovators and 1.6 million scientific papers have been published by researchers, with the majority of all AI-related patent filings published since 2013. Companies represent 26 of the top 30 AI patent applicants, with universities or public research organizations accounting for the remaining four. The ratio of scientific papers to inventions has significantly decreased, from 8:1 in 2010 to 3:1 in 2016, which is taken to indicate a shift from theoretical research to the use of AI technologies in commercial products and services. Machine learning is the dominant AI technique disclosed in patents and is included in more than one-third of all identified inventions (134,777 machine learning patents filed, out of a total of 167,038 AI patents filed in 2016), with computer vision being the most popular functional application. AI-related patents not only disclose AI techniques and applications, they often also refer to an application field or industry. Twenty application fields were identified in 2016 and included, in order of magnitude: telecommunications (15 percent), transportation (15 percent), life and medical sciences (12 percent), and personal devices, computing and human–computer interaction (11 percent). Other sectors included banking, entertainment, security, industry and manufacturing, agriculture, and networks (including social networks, smart cities and the Internet of things). IBM has the largest portfolio of AI patents with 8,290 patent applications, followed by Microsoft with 5,930 patent applications. Alan Turing advised changing the question from whether a machine "thinks" to "whether or not it is possible for machinery to show intelligent behaviour". He devised the Turing test, which measures the ability of a machine to simulate human conversation. Since we can only observe the behavior of the machine, it does not matter if it is "actually" thinking or literally has a "mind". Turing notes that we cannot determine these things about other people, but "it is usual to have a polite convention that everyone thinks".
|
Artificial intelligence
|
https://en.wikipedia.org/wiki?curid=1164
| 18,407 |
Russell and Norvig agree with Turing that AI must be defined in terms of "acting" and not "thinking". However, they are critical that the test compares machines to "people". "Aeronautical engineering texts," they wrote, "do not define the goal of their field as making 'machines that fly so exactly like pigeons that they can fool other pigeons.'" AI founder John McCarthy agreed, writing that "Artificial intelligence is not, by definition, simulation of human intelligence". McCarthy defines intelligence as "the computational part of the ability to achieve goals in the world." Another AI founder, Marvin Minsky, similarly defines it as "the ability to solve hard problems". These definitions view intelligence in terms of well-defined problems with well-defined solutions, where both the difficulty of the problem and the performance of the program are direct measures of the "intelligence" of the machine, and no further philosophical discussion is required, and may not even be possible. Another definition stipulates the ability of systems to synthesize information as the manifestation of intelligence, similar to the way it is defined in biological intelligence. No established unifying theory or paradigm has guided AI research for most of its history. The unprecedented success of statistical machine learning in the 2010s eclipsed all other approaches (so much so that some sources, especially in the business world, use the term "artificial intelligence" to mean "machine learning with neural networks"). This approach is mostly sub-symbolic, neat, soft and narrow (see below). Critics argue that these questions may have to be revisited by future generations of AI researchers. Symbolic AI (or "GOFAI") simulated the high-level conscious reasoning that people use when they solve puzzles, express legal reasoning and do mathematics. Symbolic programs were highly successful at "intelligent" tasks such as algebra or IQ tests. In the 1960s, Newell and Simon proposed the physical symbol systems hypothesis: "A physical symbol system has the necessary and sufficient means of general intelligent action." However, the symbolic approach failed on many tasks that humans solve easily, such as learning, recognizing an object or commonsense reasoning. Moravec's paradox is the discovery that high-level "intelligent" tasks were easy for AI, but low-level "instinctive" tasks were extremely difficult. Philosopher Hubert Dreyfus had argued since the 1960s that human expertise depends on unconscious instinct rather than conscious symbol manipulation, and on having a "feel" for the situation rather than explicit symbolic knowledge. Although his arguments had been ridiculed and ignored when they were first presented, eventually, AI research came to agree.
|
Artificial intelligence
|
https://en.wikipedia.org/wiki?curid=1164
| 18,415 |
The issue is not resolved: sub-symbolic reasoning can make many of the same inscrutable mistakes that human intuition does, such as algorithmic bias. Critics such as Noam Chomsky argue that continuing research into symbolic AI will still be necessary to attain general intelligence, in part because sub-symbolic AI is a move away from explainable AI: it can be difficult or impossible to understand why a modern statistical AI program made a particular decision. The emerging field of neurosymbolic artificial intelligence attempts to bridge the two approaches. "Neats" hope that intelligent behavior can be described using simple, elegant principles (such as logic, optimization, or neural networks). "Scruffies" expect that it necessarily requires solving a large number of unrelated problems (especially in areas like common sense reasoning). This issue was actively discussed in the 1970s and 1980s, but in the 1990s mathematical methods and solid scientific standards became the norm, a transition that Russell and Norvig termed "the victory of the neats". Finding a provably correct or optimal solution is intractable for many important problems. Soft computing is a set of techniques, including genetic algorithms, fuzzy logic and neural networks, that are tolerant of imprecision, uncertainty, partial truth and approximation. Soft computing was introduced in the late 1980s, and most successful AI programs in the 21st century are examples of soft computing with neural networks. AI researchers are divided as to whether to pursue the goals of artificial general intelligence and superintelligence (general AI) directly, or to solve as many specific problems as possible (narrow AI) in hopes that these solutions will lead indirectly to the field's long-term goals. General intelligence is difficult to define and difficult to measure, and modern AI has had more verifiable successes by focusing on specific problems with specific solutions. The experimental sub-field of artificial general intelligence studies this area exclusively. The philosophy of mind has not settled whether a machine can have a mind, consciousness and mental states in the same sense that human beings do. This issue considers the internal experiences of the machine, rather than its external behavior. Mainstream AI research considers this issue irrelevant because it does not affect the goals of the field. Stuart Russell and Peter Norvig observe that most AI researchers "don't care about the [philosophy of AI] – as long as the program works, they don't care whether you call it a simulation of intelligence or real intelligence." However, the question has become central to the philosophy of mind. It is also typically the central question at issue in artificial intelligence in fiction.
|
Artificial intelligence
|
https://en.wikipedia.org/wiki?curid=1164
| 18,422 |
David Chalmers identified two problems in understanding the mind, which he named the "hard" and "easy" problems of consciousness. The easy problem is understanding how the brain processes signals, makes plans and controls behavior. The hard problem is explaining how this "feels", or why it should feel like anything at all. Human information processing is easy to explain; human subjective experience, however, is difficult to explain. For example, it is easy to imagine a color-blind person who has learned to identify which objects in their field of view are red, but it is not clear what would be required for the person to "know what red looks like". Computationalism is the position in the philosophy of mind that the human mind is an information processing system and that thinking is a form of computing. Computationalism argues that the relationship between mind and body is similar or identical to the relationship between software and hardware, and thus may offer a solution to the mind-body problem. This philosophical position was inspired by the work of AI researchers and cognitive scientists in the 1960s and was originally proposed by philosophers Jerry Fodor and Hilary Putnam. Philosopher John Searle characterized this position as "strong AI": "The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds." Searle counters this assertion with his Chinese room argument, which attempts to show that, even if a machine perfectly simulates human behavior, there is still no reason to suppose it also has a mind. If a machine has a mind and subjective experience, then it may also have sentience (the ability to feel), and if so, then it could also "suffer", and thus it would be entitled to certain rights. This question of possible robot rights is now being considered by, for example, California's Institute for the Future; however, critics argue that the discussion is premature. A superintelligence, hyperintelligence, or superhuman intelligence, is a hypothetical agent that would possess intelligence far surpassing that of the brightest and most gifted human mind. "Superintelligence" may also refer to the form or degree of intelligence possessed by such an agent. If research into artificial general intelligence produced sufficiently intelligent software, it might be able to reprogram and improve itself. The improved software would be even better at improving itself, leading to recursive self-improvement. Its intelligence would increase exponentially in an intelligence explosion and could dramatically surpass humans. Science fiction writer Vernor Vinge named this scenario the "singularity". Because it is difficult or impossible to know the limits of intelligence or the capabilities of superintelligent machines, the technological singularity is an occurrence beyond which events are unpredictable or even unfathomable.
|
Artificial intelligence
|
https://en.wikipedia.org/wiki?curid=1164
| 18,432 |
Robot designer Hans Moravec, cyberneticist Kevin Warwick, and inventor Ray Kurzweil have predicted that humans and machines will merge in the future into cyborgs that are more capable and powerful than either. This idea, called transhumanism, has roots in Aldous Huxley and Robert Ettinger. Edward Fredkin argues that "artificial intelligence is the next stage in evolution", an idea first proposed by Samuel Butler's "Darwin among the Machines" as far back as 1863, and expanded upon by George Dyson in his book of the same name in 1998. In the past, technology has tended to increase rather than reduce total employment, but economists acknowledge that "we're in uncharted territory" with AI. A survey of economists showed disagreement about whether the increasing use of robots and AI will cause a substantial increase in long-term unemployment, but they generally agree that it could be a net benefit if productivity gains are redistributed. Subjective estimates of the risk vary widely; for example, Michael Osborne and Carl Benedikt Frey estimate 47% of U.S. jobs are at "high risk" of potential automation, while an OECD report classifies only 9% of U.S. jobs as "high risk". Unlike previous waves of automation, many middle-class jobs may be eliminated by artificial intelligence; "The Economist" states that "the worry that AI could do to white-collar jobs what steam power did to blue-collar ones during the Industrial Revolution" is "worth taking seriously". Jobs at extreme risk range from paralegals to fast food cooks, while job demand is likely to increase for care-related professions ranging from personal healthcare to the clergy. AI provides a number of tools that are particularly useful for authoritarian governments: smart spyware, face recognition and voice recognition allow widespread surveillance; such surveillance allows machine learning to classify potential enemies of the state and can prevent them from hiding; recommendation systems can precisely target propaganda and misinformation for maximum effect; deepfakes aid in producing misinformation; advanced AI can make centralized decision making more competitive with liberal and decentralized systems such as markets. Terrorists, criminals and rogue states may use other forms of weaponized AI such as advanced digital warfare and lethal autonomous weapons. By 2015, over fifty countries were reported to be researching battlefield robots. Machine-learning AI is also able to design tens of thousands of toxic molecules in a matter of hours. AI programs can become biased after learning from real-world data. It is not typically introduced by the system designers but is learned by the program, and thus the programmers are often unaware that the bias exists.
|
Artificial intelligence
|
https://en.wikipedia.org/wiki?curid=1164
| 18,443 |
It can also emerge from correlations: AI is used to classify individuals into groups and then make predictions assuming that the individual will resemble other members of the group. In some cases, this assumption may be unfair. An example of this is COMPAS, a commercial program widely used by U.S. courts to assess the likelihood of a defendant becoming a recidivist. ProPublica claims that the COMPAS-assigned recidivism risk level of black defendants is far more likely to be overestimated than that of white defendants, despite the fact that the program was not told the races of the defendants. Other examples where algorithmic bias can lead to unfair outcomes are when AI is used for credit rating or hiring. At its 2022 Conference on Fairness, Accountability, and Transparency (ACM FAccT 2022), held in Seoul, South Korea, the Association for Computing Machinery presented and published findings recommending that until AI and robotics systems are demonstrated to be free of bias mistakes, they are unsafe, and that the use of self-learning neural networks trained on vast, unregulated sources of flawed internet data should be curtailed. Superintelligent AI may be able to improve itself to the point that humans could not control it. This could, as physicist Stephen Hawking puts it, "spell the end of the human race". Philosopher Nick Bostrom argues that sufficiently intelligent AI, if it chooses actions based on achieving some goal, will exhibit convergent behavior such as acquiring resources or protecting itself from being shut down. If this AI's goals do not fully reflect humanity's, it might need to harm humanity to acquire more resources or prevent itself from being shut down, ultimately to better achieve its goal. He concludes that AI poses a risk to mankind, however humble or "friendly" its stated goals might be. Political scientist Charles T. Rubin argues that "any sufficiently advanced benevolence may be indistinguishable from malevolence." Humans should not assume machines or robots would treat us favorably, because there is no "a priori" reason to believe that they would share our system of morality. The opinion of experts and industry insiders is mixed, with sizable fractions both concerned and unconcerned by risk from eventual superhumanly-capable AI. Stephen Hawking, Microsoft founder Bill Gates, history professor Yuval Noah Harari, and SpaceX founder Elon Musk have all expressed serious misgivings about the future of AI. Prominent tech titans, including Peter Thiel, Amazon Web Services, and Musk, have committed more than $1 billion to nonprofit companies that champion responsible AI development, such as OpenAI and the Future of Life Institute.
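Returning to the bias audits described above, a minimal sketch of one common check is to compare false positive rates across groups; the group labels, predictions, and outcomes below are invented for illustration and do not reproduce any real COMPAS data:

```python
# Invented audit data: (group, predicted_high_risk, actually_reoffended).
records = [
    ("A", True, False), ("A", False, False), ("A", True, True), ("A", False, False),
    ("B", True, False), ("B", True, False), ("B", True, True), ("B", False, False),
]

def false_positive_rate(group: str) -> float:
    """Share of non-reoffenders in a group wrongly flagged as high risk."""
    negatives = [r for r in records if r[0] == group and not r[2]]
    flagged = [r for r in negatives if r[1]]
    return len(flagged) / len(negatives)

for g in ("A", "B"):
    print(f"group {g}: FPR = {false_positive_rate(g):.2f}")
# A large gap between the two rates is one signal of the kind of
# disparity ProPublica reported for COMPAS.
```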
|
Artificial intelligence
|
https://en.wikipedia.org/wiki?curid=1164
| 18,451 |
Mark Zuckerberg (CEO, Facebook) has said that artificial intelligence is helpful in its current form and will continue to assist humans. AI's decision-making abilities raise questions of legal responsibility and of the copyright status of created works. These issues are being refined in various jurisdictions. Friendly AI are machines that have been designed from the beginning to minimize risks and to make choices that benefit humans. Eliezer Yudkowsky, who coined the term, argues that developing friendly AI should be a higher research priority: it may require a large investment and it must be completed before AI becomes an existential risk. Machines with intelligence have the potential to use their intelligence to make ethical decisions. The field of machine ethics provides machines with ethical principles and procedures for resolving ethical dilemmas. The regulation of artificial intelligence is the development of public sector policies and laws for promoting and regulating artificial intelligence (AI); it is therefore related to the broader regulation of algorithms. Most EU member states had released national AI strategies, as had Canada, China, India, Japan, Mauritius, the Russian Federation, Saudi Arabia, the United Arab Emirates, the US and Vietnam. Others were in the process of elaborating their own AI strategy, including Bangladesh, Malaysia and Tunisia. The Global Partnership on Artificial Intelligence was launched in June 2020, stating a need for AI to be developed in accordance with human rights and democratic values, to ensure public confidence and trust in the technology. Henry Kissinger, Eric Schmidt, and Daniel Huttenlocher published a joint statement in November 2021 calling for a government commission to regulate AI. A common trope in fictional works about AI began with Mary Shelley's "Frankenstein", where a human creation becomes a threat to its masters. This includes such works as Arthur C. Clarke's and Stanley Kubrick's "2001: A Space Odyssey" (both 1968), with HAL 9000, the murderous computer in charge of the "Discovery One" spaceship, as well as "The Terminator" (1984) and "The Matrix" (1999). In contrast, the rare loyal robots such as Gort from "The Day the Earth Stood Still" (1951) and Bishop from "Aliens" (1986) are less prominent in popular culture. Isaac Asimov introduced the Three Laws of Robotics in many books and stories, most notably his robot stories; he also wrote the "Multivac" series about a super-intelligent computer of the same name. Asimov's laws are often brought up during lay discussions of machine ethics; while almost all artificial intelligence researchers are familiar with Asimov's laws through popular culture, they generally consider the laws useless for many reasons, one of which is their ambiguity.
|
Artificial intelligence
|
https://en.wikipedia.org/wiki?curid=1164
| 18,461 |
Transhumanism (the merging of humans and machines) is explored in the manga "Ghost in the Shell" and the science-fiction series "Dune". Several works use AI to force us to confront the fundamental question of what makes us human, showing us artificial beings that have the ability to feel, and thus to suffer. This appears in Karel Čapek's "R.U.R.", the films "A.I. Artificial Intelligence" and "Ex Machina", as well as the novel "Do Androids Dream of Electric Sheep?" by Philip K. Dick. Dick considers the idea that our understanding of human subjectivity is altered by technology created with artificial intelligence. As technology and research evolve and the world enters the third revolution of warfare following gunpowder and nuclear weapons, an artificial intelligence arms race has ensued among the United States, China, and Russia, three countries with military budgets among the world's five highest. China's leader Xi Jinping has declared an intention for China to be a world leader in AI research by 2030, and President Putin of Russia has stated that "Whoever becomes the leader in this sphere will become the ruler of the world". Putin has also stated that, if Russia were to become the leader in AI research, Russia would share some of its research with the world so as not to monopolize the field, similar to its current sharing of nuclear technologies, maintaining science diplomacy relations. The United States, China, and Russia are examples of countries that have taken stances toward military artificial intelligence since as early as 2014, having established military programs to develop cyber weapons, control lethal autonomous weapons, and drones that can be used for surveillance. President Putin announced that artificial intelligence is the future for all mankind, and recognizes the power and opportunities that the development and deployment of lethal autonomous weapons AI technology can hold in warfare and homeland security, as well as its threats. President Putin's prediction that future wars will be fought using AI has started to come to fruition to an extent after Russia invaded Ukraine on 24 February 2022. The Ukrainian military is making use of Turkish Bayraktar TB2 drones that still require human operation to deploy laser-guided bombs but can take off, land, and cruise autonomously. Ukraine has also been using Switchblade drones supplied by the US, and receiving battlefield intelligence gathered by the United States' own surveillance operations regarding Russia. Similarly, Russia can use AI to help analyze battlefield data from surveillance footage taken by drones. Reports and images show that Russia's military has deployed KUB-BLA suicide drones in Ukraine, with speculation of intentions to assassinate Ukrainian President Volodymyr Zelenskyy.
|
Artificial intelligence
|
https://en.wikipedia.org/wiki?curid=1164
| 18,465 |
As research in the AI realm progresses, there is pushback about the use of AI from the Campaign to Stop Killer Robots, and in 2017 world technology leaders sent a petition to the United Nations calling for new regulations on the development and use of AI technologies, including a ban on the use of lethal autonomous weapons due to ethical concerns for innocent civilian populations. With ever-evolving cyber-attacks and new generations of devices, AI can be used for threat detection and more effective response through risk prioritization. With this tool, some challenges are also presented, such as privacy, informed consent, and responsible use. According to CISA, cyberspace is difficult to secure for the following factors: the ability of malicious actors to operate from anywhere in the world, the linkages between cyberspace and physical systems, and the difficulty of reducing vulnerabilities and consequences in complex cyber networks. With the increased technological advances of the world, the risk of wide-scale consequential events rises. Paradoxically, the ability to protect information and create a line of communication between the scientific and diplomatic community thrives. The role of cybersecurity in diplomacy has become increasingly relevant, creating the term cyber diplomacy, which is not uniformly defined and is not synonymous with cyber defence. Many nations have developed unique approaches to scientific diplomacy in cyberspace. The Czech approach dates back to 2011, when the Czech National Security Authority (NSA) was appointed as the national authority for the cyber agenda. The role of cyber diplomacy strengthened in 2017 when the Czech Ministry of Foreign Affairs (MFA) detected a serious cyber campaign directed against its own computer networks. In 2016, three cyber diplomats were deployed to Washington, D.C., Brussels and Tel Aviv, with the goal of establishing active international cooperation focused on engagement with the EU and NATO. The main agenda for these scientific diplomacy efforts is to bolster research on artificial intelligence and how it can be used in cybersecurity research, development, and overall consumer trust. CzechInvest is a key stakeholder in scientific diplomacy and cybersecurity. For example, in September 2018, it organized a mission to Canada with a special focus on artificial intelligence. The main goal of this particular mission was a promotional effort on behalf of Prague, attempting to establish it as a future knowledge hub for the industry for interested Canadian firms. In Germany, cybersecurity is recognized as a governmental task divided among three ministries of responsibility: the Federal Ministry of the Interior, the Federal Ministry of Defence, and the Federal Foreign Office. These distinctions promoted the creation of various institutions, such as the German National Office for Information Security, the National Cyberdefence Centre, the German National Cyber Security Council, and the Cyber and Information Domain Service. In 2018, a new strategy for artificial intelligence was established by the German government, with the creation of a German-French virtual research and innovation network, holding opportunity for research expansion into cybersecurity.
|
Artificial intelligence
|
https://en.wikipedia.org/wiki?curid=1164
| 18,469 |
The adoption of "The Cybersecurity Strategy of the European Union – An Open, Safe and Secure Cyberspace" document in 2013 by the European commission pushed forth cybersecurity efforts integrated with scientific diplomacy and artificial intelligence. Efforts are strong, as the EU funds various programs and institutions in the effort to bring science to diplomacy and bring diplomacy to science. Some examples are the cyber security programme Competence Research Innovation (CONCORDIA), which brings together 14 member states, and Cybersecurity for Europe (CSE), which brings together 43 partners involving 20 member states. In addition, The European Network of Cybersecurity Centres and Competence Hub for Innovation and Operations (ECHO) gathers 30 partners with 15 member states and SPARTA gathers 44 partners involving 14 member states. These efforts reflect the overall goals of the EU, to innovate cybersecurity for defense and protection, establish a highly integrated cyberspace among many nations, and further contribute to the security of artificial intelligence. With the 2022 invasion of Ukraine, there has been a rise in malicious cyber activity against the United States, Ukraine, and Russia. A prominent and rare documented use of artificial intelligence in conflict is on behalf of Ukraine, using facial recognition software to uncover Russian assailants and identify Ukrainians killed in the ongoing war. Though these governmental figures are not primarily focused on scientific and cyber diplomacy, other institutions are commenting on the use of artificial intelligence in cybersecurity with that focus. For example, Georgetown University's Center for Security and Emerging Technology (CSET) has the Cyber-AI Project, with one goal being to attract policymakers' attention to the growing body of academic research, which exposes the exploitive consequences of AI and machine-learning (ML) algorithms. This vulnerability can be a plausible explanation as to why Russia is not engaging in the use of AI in conflict per, Andrew Lohn, a senior fellow at CSET. In addition to use on the battlefield, AI is being used by the Pentagon to analyze data from the war, analyzing to strengthen cybersecurity and warfare intelligence for the United States. As artificial intelligence grows and the overwhelming amount of news portrayed through cyberspace expands, it is becoming extremely overwhelming for a voter to know what to believe. There are many intelligent codes, referred to as bots, written to portray people on social media with the goal of spreading misinformation. The 2016 US election is a victim of such actions. During the Hillary Clinton and Donald Trump campaign, artificial intelligent bots from Russia were spreading misinformation about the candidates in order to help the Trump campaign. Analysts concluded that approximately 19% of Twitter tweets centered around the 2016 election were detected to come from bots. YouTube in recent years has been used to spread political information as well. Although there is no proof that the platform attempts to manipulate its viewers opinions, Youtubes AI algorithm recommends videos of similar variety. If a person begins to research right wing political podcasts, then YouTube's algorithm will recommend more right wing videos. The uprising in a program called Deepfake, a software used to replicate someone's face and words, has also shown its potential threat. In 2018 a Deepfake video of Barack Obama was released saying words he claims to have never said. 
While a deepfake in a national election would quickly be debunked, the software has the capability to heavily sway a smaller local election. This tool holds great potential for spreading misinformation and is monitored with close attention. Although it may be seen as a tool used for harm, AI can also help enhance election campaigns. AI bots can be programmed to target articles containing known misinformation; the bots can then flag what is being misrepresented to help bring the truth to light. AI can also be used to inform a person where each party stands on a certain topic such as healthcare or climate change. The political leaders of a nation have heavy sway on international affairs. Thus, a political leader who lacks interest in international collaborative scientific advancement can have a negative impact on the scientific diplomacy of that nation.
|
Artificial intelligence
|
https://en.wikipedia.org/wiki?curid=1164
| 18,472 |
The use of artificial intelligence (AI) has subtly grown to become part of everyday life. It is used every day in facial recognition software, serving as the first measure of security for many companies in the form of biometric authentication. This means of authentication allows even the most official organizations, such as the United States Internal Revenue Service, to verify a person's identity via a database generated from machine learning. As of 2022, the United States IRS requires those who do not undergo a live interview with an agent to complete a biometric verification of their identity via ID.me's facial recognition tool. In Japan and South Korea, artificial intelligence software is used in the instruction of the English language via the company Riiid. Riiid is a Korean education company working alongside Japan to give students the means to learn and use their English communication skills by engaging with artificial intelligence in a live chat. Riiid is not the only company to do this. The American company Duolingo is well known for its automated teaching of 41 languages. Babbel, a German language-learning program, also uses artificial intelligence in its teaching automation, allowing European students to learn vital communication skills needed in social, economic, and diplomatic settings. Artificial intelligence will also automate the routine tasks that teachers need to do, such as grading, taking attendance, and handling routine student inquiries. This enables the teacher to carry on with the complexities of teaching that an automated machine cannot handle, including creating exams, explaining complex material in a way that benefits students individually, and handling unique questions from students. Unlike the human brain, which possesses generalized intelligence, the specialized intelligence of AI can serve as a means of support to physicians internationally. The medical field has a diverse and profound amount of data that AI can employ to generate a predictive diagnosis. Researchers at an Oxford hospital have developed artificial intelligence that can diagnose heart scans for heart disease and cancer. This artificial intelligence can pick up diminutive details in the scans that doctors may miss. As such, artificial intelligence in medicine will improve the industry, giving doctors the means to precisely diagnose their patients using the tools available. The artificial intelligence algorithms will also be used to further improve diagnosis over time, via an application of machine learning called precision medicine. Furthermore, the narrow application of artificial intelligence can use "deep learning" to improve medical image analysis. In radiology imaging, AI uses deep learning algorithms to identify potentially cancerous lesions, an important process assisting in early diagnosis. Data analysis is a fundamental property of artificial intelligence that enables it to be used in every facet of life, from search results to the way people buy products. According to NewVantage Partners, over 90% of top businesses have ongoing investments in artificial intelligence. According to IBM, one of the world's leaders in technology, 45% of respondents from companies with over 1,000 employees have adopted AI. Recent data shows that the business market for artificial intelligence was valued at $51.08 billion in 2020 and is projected to exceed $640.3 billion by 2028.
To prevent harm, AI-deploying organizations need to play a central role in creating and deploying trustworthy AI in line with the principles of trustworthy AI, and take accountability to mitigate the risks.
|
Artificial intelligence
|
https://en.wikipedia.org/wiki?curid=1164
| 18,476 |
With the exponential growth of artificial intelligence and communication technology, the distribution of one's ideals and values has become evident in daily life. Digital information is spread via communication apps such as WhatsApp, Facebook/Meta, Snapchat, Instagram and Twitter. However, it is known that these sites relay specific information corresponding to data analysis. If a right-wing individual were to perform a Google search, Google's algorithms would target that individual and relay data pertinent to that target audience. US President Bill Clinton noted in 2000: "In the new century, liberty will spread by cell phone and cable modem. [...] We know how much the Internet has changed America, and we are already an open society." However, when the private sector uses artificial intelligence to gather data, a shift in power from the state to the private sector may be seen. This shift in power, specifically toward large technological corporations, could profoundly change how diplomacy functions in society. The rise in digital technology and the usage of artificial intelligence have enabled the private sector to gather immense data on the public, which is then further categorized by race, location, age, gender, etc. "The New York Times" calculates that "the ten largest tech firms, which have become gatekeepers in commerce, finance, entertainment and communications, now have a combined market capitalization of more than $10 trillion. In gross domestic product terms, that would rank them as the world's third-largest economy." Beyond the general lobbying of members of Congress, companies such as Facebook/Meta or Google use collected data in order to reach their intended audiences with targeted information. Multiple nations around the globe employ artificial intelligence to assist with their foreign policy decisions. The Chinese Department of External Security Affairs – under the Ministry of Foreign Affairs – uses AI to review almost all its foreign investment projects for risk mitigation. The government of China plans to use artificial intelligence in its $900 billion global infrastructure development plan, called the "Belt and Road Initiative", for political, economic, and environmental risk alleviation. Over 200 applications of artificial intelligence are being used by over 46 United Nations agencies, in sectors ranging from health care (dealing with issues such as combating COVID-19) to smart agriculture, to assist the UN in political and diplomatic relations. One example is the use of AI by the UN Global Pulse program to model the effect of the spread of COVID-19 on internally displaced people (IDPs) and refugee settlements, to assist in creating an appropriate global health policy.
|
Artificial intelligence
|
https://en.wikipedia.org/wiki?curid=1164
| 18,479 |
Novel AI tools such as remote sensing can also be employed by diplomats for collecting and analyzing data and for near-real-time tracking of objects such as troop or refugee movements along borders in violent conflict zones. Artificial intelligence can also be used in vital cross-national diplomatic talks to prevent the translation errors caused by human translators. A major example is the 2021 Anchorage meetings held between the US and China, aimed at stabilizing foreign relations, which instead had the opposite effect, increasing tension and aggressiveness between the two nations due to translation errors caused by human translators. In the meeting, when Jacob Jeremiah Sullivan, United States National Security Advisor to President Joe Biden, stated, "We do not seek conflict, but we welcome stiff competition and we will always stand up for our principles, for our people, and for our friends", it was mistranslated into Chinese as "we will face competition between us, and will present our stance in a very clear manner", adding an aggressive tone to the speech. AI's capacity for fast and efficient natural language processing and real-time translation and transliteration makes it an important tool for foreign-policy communication between nations, helping to prevent unintended mistranslation.
|
Stephen Curry
|
https://en.wikipedia.org/wiki?curid=5608488
| 19,838 |
During the 1992 NBA All-Star Game weekend, Curry's father entrusted him to Biserka Petrović, mother of future Hall of Fame player Dražen Petrović, while Dell competed in the Three-Point Contest. Following the 2015 NBA Finals, Curry gave Biserka one of his Finals-worn jerseys, which will reportedly be added to the collection of the Dražen Petrović Memorial Center, a museum to the late player in the Croatian capital of Zagreb. Curry suffers from keratoconus and wears contact lenses to correct his vision. Curry is also an avid golfer; he started playing golf at the age of 10, played golf in high school, and frequently plays golf with teammate Andre Iguodala. A 5-handicap golfer, Curry participates in celebrity golf tournaments and has played golf alongside Barack Obama. In August 2017, Curry competed in the Ellie Mae Classic on an unrestricted sponsor exemption. Although he missed the cut, he shot a 4-over-par 74 on both days he played, surpassing most expectations for an amateur competing in a pro event. In August 2019, Curry and Howard University, a historically black institution in Washington, D.C., jointly announced that the school would add NCAA Division I teams in men's and women's golf starting in the 2020–21 school year, with Curry guaranteeing full funding of both teams for six years. Curry is also a fan of English soccer club Chelsea F.C.
|
Lockheed Martin F-35 Lightning II
|
https://en.wikipedia.org/wiki?curid=11812
| 22,145 |
The F-35 was the product of the Joint Strike Fighter (JSF) program, which was the merger of various combat aircraft programs from the 1980s and 1990s. One progenitor program was the Defense Advanced Research Projects Agency (DARPA) Advanced Short Take-Off/Vertical Landing (ASTOVL) program, which ran from 1983 to 1994; ASTOVL aimed to develop a replacement for the Harrier jump jet for the U.S. Marine Corps (USMC) and the U.K. Royal Navy. Under one of ASTOVL's classified programs, the Supersonic STOVL Fighter (SSF), Lockheed Skunk Works conducted research for a stealthy supersonic STOVL fighter intended for both the U.S. Air Force (USAF) and the USMC; a key technology explored was the shaft-driven lift fan (SDLF) system. Lockheed's concept was a single-engine canard delta aircraft weighing about empty. ASTOVL was rechristened as the Common Affordable Lightweight Fighter (CALF) in 1993 and involved Lockheed, McDonnell Douglas, and Boeing. The X-35A first flew on 24 October 2000 and conducted flight tests for subsonic and supersonic flying qualities, handling, range, and maneuver performance. After 28 flights, the aircraft was then converted into the X-35B for STOVL testing, with key changes including the addition of the SDLF, the three-bearing swivel module (3BSM), and roll-control ducts. The X-35B would successfully demonstrate the SDLF system by performing stable hover, vertical landing, and short takeoff in less than . The X-35C first flew on 16 December 2000 and conducted field landing carrier practice tests. As the JSF program moved into the System Development and Demonstration phase, the X-35 demonstrator design was modified to create the F-35 combat aircraft. The forward fuselage was lengthened by to make room for mission avionics, while the horizontal stabilizers were moved aft to retain balance and control. The diverterless supersonic inlet changed from a four-sided to a three-sided cowl shape and was moved aft. The fuselage section was fuller, the top surface raised by along the centerline to accommodate weapons bays. Following the designation of the X-35 prototypes, the three variants were designated F-35A (CTOL), F-35B (STOVL), and F-35C (CV), all with a design service life of 8,000 hours. Prime contractor Lockheed Martin performs overall systems integration and final assembly and checkout (FACO), while Northrop Grumman and BAE Systems supply components for mission systems and airframe.
|
Lockheed Martin F-35 Lightning II
|
https://en.wikipedia.org/wiki?curid=11812
| 22,194 |
The F-35A and F-35B were cleared for basic flight training in early 2012, although there were concerns over safety and performance due to a lack of system maturity at the time. During the Low Rate Initial Production (LRIP) phase, the three U.S. military services jointly developed tactics and procedures using flight simulators, testing effectiveness, discovering problems and refining the design. On 10 September 2012, the USAF began an operational utility evaluation (OUE) of the F-35A, including logistical support, maintenance, personnel training, and pilot execution. The United Kingdom's Royal Air Force and Royal Navy both operate the F-35B, known simply as the Lightning in British service; it has replaced the Harrier GR9, which was retired in 2010, and the Tornado GR4, which was retired in 2019. The F-35 is to be Britain's primary strike aircraft for the next three decades. One of the Royal Navy's requirements for the F-35B was a Shipborne Rolling Vertical Landing (SRVL) mode to increase maximum landing weight by using wing lift during landing. When operating on the aircraft carriers HMS Queen Elizabeth (R08) and HMS Prince of Wales (R09), British F-35Bs use ski-jumps, as does the Italian Navy. British F-35Bs are not intended to use the Brimstone 2 missile. In July 2013, the Chief of the Air Staff, Air Chief Marshal Sir Stephen Dalton, announced that No. 617 (The Dambusters) Squadron would be the RAF's first operational F-35 squadron. The second operational squadron will be the Fleet Air Arm's 809 Naval Air Squadron, which will stand up in April 2023 or later. On 22 May 2018, IAF chief Amikam Norkin said that the service had employed their F-35Is in two attacks on two battle fronts, marking the first combat operation of an F-35 by any country. Norkin said it had been flown "all over the Middle East", and showed photos of an F-35I flying over Beirut in daylight. In July 2019, Israel expanded its strikes against Iranian missile shipments; IAF F-35Is allegedly struck Iranian targets in Iraq twice. On 11 May 2021, eight IAF F-35Is took part in an attack on 150 targets in Hamas' rocket array, including 50–70 launch pits in the northern Gaza Strip, as part of Operation Guardian of the Walls. Japan's F-35As were declared to have reached initial operational capability (IOC) on 29 March 2019, by which time Japan had taken delivery of 10 F-35As, stationed at Misawa Air Base. Japan plans to eventually acquire a total of 147 F-35s, including 42 F-35Bs, and plans to use the latter variant to equip its Izumo-class multi-purpose destroyers.
|
Stephen Hawking
|
https://en.wikipedia.org/wiki?curid=19376148
| 26,123 |
In the early 1970s, Hawking's work with Carter, Werner Israel, and David C. Robinson strongly supported Wheeler's no-hair theorem, which states that, no matter what original material a black hole is created from, it can be completely described by the properties of mass, electrical charge and rotation. His essay titled "Black Holes" won the Gravity Research Foundation Award in January 1971. Hawking's first book, "The Large Scale Structure of Space-Time", written with George Ellis, was published in 1973. Hawking continued his writings for a popular audience, publishing "The Universe in a Nutshell" in 2001, "A Briefer History of Time", which he wrote in 2005 with Leonard Mlodinow to update his earlier works with the aim of making them accessible to a wider audience, and "God Created the Integers", which appeared in 2006. Along with Thomas Hertog at CERN and Jim Hartle, from 2006 on Hawking developed a theory of top-down cosmology, which says that the universe had not one unique initial state but many different ones, and therefore that it is inappropriate to formulate a theory that predicts the universe's current configuration from one particular initial state. Top-down cosmology posits that the present "selects" the past from a superposition of many possible histories. In doing so, the theory suggests a possible resolution of the fine-tuning question. Hawking was also a supporter of a universal basic income. He was critical of the Israeli government's position on the Israeli–Palestinian conflict, stating that its policy "is likely to lead to disaster." In 1988, Hawking, Arthur C. Clarke and Carl Sagan were interviewed in "God, the Universe and Everything Else"; they discussed the Big Bang theory, God and the possibility of extraterrestrial life. Hawking used his fame to advertise products, including a wheelchair, National Savings, British Telecom, Specsavers, Egg Banking, and Go Compare, and in 2015 he applied to trademark his name. His other important work relates to the interpretation of cosmological observations and to the design of gravitational-wave detectors.
|
Earth
|
https://en.wikipedia.org/wiki?curid=9228
| 27,553 |
The angle of Earth's axial tilt is relatively stable over long periods of time. Its axial tilt does undergo nutation, a slight, irregular motion with a main period of 18.6 years. The orientation (rather than the angle) of Earth's axis also changes over time, precessing around in a complete circle over each 25,800-year cycle; this precession is the reason for the difference between a sidereal year and a tropical year. Both of these motions are caused by the varying attraction of the Sun and the Moon on Earth's equatorial bulge. The poles also migrate a few meters across Earth's surface. This polar motion has multiple, cyclical components, which collectively are termed quasiperiodic motion. In addition to an annual component to this motion, there is a 14-month cycle called the Chandler wobble. Earth's rotational velocity also varies in a phenomenon known as length-of-day variation. Viewed from Earth, the Moon is just far enough away to have almost the same apparent-sized disk as the Sun. The angular sizes (or solid angles) of these two bodies match because, although the Sun's diameter is about 400 times as large as the Moon's, it is also 400 times more distant. This allows total and annular solar eclipses to occur on Earth. The upper atmosphere, the atmosphere above the troposphere, is usually divided into the stratosphere, mesosphere, and thermosphere. Each layer has a different lapse rate, defining the rate of change in temperature with height. Beyond these, the exosphere thins out into the magnetosphere, where the geomagnetic fields interact with the solar wind. Within the stratosphere is the ozone layer, a component that partially shields the surface from ultraviolet light and thus is important for life on Earth. The Kármán line, defined as 100 km (62 mi) above Earth's surface, is a working definition for the boundary between the atmosphere and outer space.
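The equality of apparent sizes can be checked with a short worked calculation, using the article's own round 400× figures and the small-angle approximation (a sketch, not a precise ephemeris):

\[ \theta \approx \frac{D}{d}, \qquad \frac{\theta_{\mathrm{Sun}}}{\theta_{\mathrm{Moon}}} = \frac{D_{\mathrm{Sun}}/d_{\mathrm{Sun}}}{D_{\mathrm{Moon}}/d_{\mathrm{Moon}}} = \frac{400\,D_{\mathrm{Moon}}/(400\,d_{\mathrm{Moon}})}{D_{\mathrm{Moon}}/d_{\mathrm{Moon}}} = 1 \]

The two disks therefore subtend nearly the same angle, which is why both total and annular eclipses are possible.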
|
JavaScript
|
https://en.wikipedia.org/wiki?curid=9845
| 28,048 |
JavaScript (), often abbreviated as JS, is a programming language that is one of the core technologies of the World Wide Web, alongside HTML and CSS. As of 2022, 98% of websites use JavaScript on the client side for webpage behavior, often incorporating third-party libraries. All major web browsers have a dedicated JavaScript engine to execute the code on users' devices. JavaScript is a high-level, often just-in-time compiled language that conforms to the ECMAScript standard. It has dynamic typing, prototype-based object-orientation, and first-class functions. It is multi-paradigm, supporting event-driven, functional, and imperative programming styles. It has application programming interfaces (APIs) for working with text, dates, regular expressions, standard data structures, and the Document Object Model (DOM). The ECMAScript standard does not include any input/output (I/O), such as networking, storage, or graphics facilities. In practice, the web browser or other runtime system provides JavaScript APIs for I/O. JavaScript engines were originally used only in web browsers, but are now core components of some servers and a variety of applications. The most popular runtime system for this usage is Node.js. Although Java and JavaScript are similar in name, syntax, and respective standard libraries, the two languages are distinct and differ greatly in design. The first popular web browser with a graphical user interface, Mosaic, was released in 1993. Accessible to non-technical people, it played a prominent role in the rapid growth of the nascent World Wide Web. The lead developers of Mosaic then founded the Netscape corporation, which released a more polished browser, Netscape Navigator, in 1994. This quickly became the most-used browser. During these formative years of the Web, web pages could only be static, lacking the capability for dynamic behavior after the page was loaded in the browser. There was a desire in the flourishing web development scene to remove this limitation, so in 1995, Netscape decided to add a scripting language to Navigator. They pursued two routes to achieve this: collaborating with Sun Microsystems to embed the Java programming language, while also hiring Brendan Eich to embed the Scheme language. Netscape management soon decided that the best option was for Eich to devise a new language, with syntax similar to Java and less like Scheme or other extant scripting languages. Although the new language and its interpreter implementation were called LiveScript when first shipped as part of a Navigator beta in September 1995, the name was changed to JavaScript for the official release in December.
|
JavaScript
|
https://en.wikipedia.org/wiki?curid=9845
| 28,056 |
The choice of the JavaScript name has caused confusion, implying that it is directly related to Java. At the time, the dot-com boom had begun and Java was the hot new language, so Eich considered the JavaScript name a marketing ploy by Netscape. Microsoft debuted Internet Explorer in 1995, leading to a browser war with Netscape. On the JavaScript front, Microsoft reverse-engineered the Navigator interpreter to create its own, called JScript. JScript was first released in 1996, alongside initial support for CSS and extensions to HTML. Each of these implementations was noticeably different from their counterparts in Navigator. These differences made it difficult for developers to make their websites work well in both browsers, leading to widespread use of "best viewed in Netscape" and "best viewed in Internet Explorer" logos for several years. In November 1996, Netscape submitted JavaScript to Ecma International, as the starting point for a standard specification that all browser vendors could conform to. This led to the official release of the first ECMAScript language specification in June 1997. The standards process continued for a few years, with the release of ECMAScript 2 in June 1998 and ECMAScript 3 in December 1999. Work on ECMAScript 4 began in 2000. Meanwhile, Microsoft gained an increasingly dominant position in the browser market. By the early 2000s, Internet Explorer's market share reached 95%. This meant that JScript became the de facto standard for client-side scripting on the Web. Microsoft initially participated in the standards process and implemented some proposals in its JScript language, but eventually it stopped collaborating on Ecma work. Thus ECMAScript 4 was mothballed. During the period of Internet Explorer dominance in the early 2000s, client-side scripting was stagnant. This started to change in 2004, when the successor of Netscape, Mozilla, released the Firefox browser. Firefox was well received by many, taking significant market share from Internet Explorer. In 2005, Mozilla joined ECMA International, and work started on the ECMAScript for XML (E4X) standard. This led to Mozilla working jointly with Macromedia (later acquired by Adobe Systems), who were implementing E4X in their ActionScript 3 language, which was based on an ECMAScript 4 draft. The goal became standardizing ActionScript 3 as the new ECMAScript 4. To this end, Adobe Systems released the Tamarin implementation as an open source project. However, Tamarin and ActionScript 3 were too different from established client-side scripting, and without cooperation from Microsoft, ECMAScript 4 never reached fruition.
|
JavaScript
|
https://en.wikipedia.org/wiki?curid=9845
| 28,065 |
Meanwhile, very important developments were occurring in open-source communities not affiliated with ECMA work. In 2005, Jesse James Garrett released a white paper in which he coined the term Ajax and described a set of technologies, of which JavaScript was the backbone, to create web applications where data can be loaded in the background, avoiding the need for full page reloads. This sparked a renaissance period of JavaScript, spearheaded by open-source libraries and the communities that formed around them. Many new libraries were created, including jQuery, Prototype, Dojo Toolkit, and MooTools. Google debuted its Chrome browser in 2008, with the V8 JavaScript engine that was faster than its competition. The key innovation was just-in-time compilation (JIT), so other browser vendors needed to overhaul their engines for JIT. In July 2008, these disparate parties came together for a conference in Oslo. This led to the eventual agreement in early 2009 to combine all relevant work and drive the language forward. The result was the ECMAScript 5 standard, released in December 2009. Ambitious work on the language continued for several years, culminating in an extensive collection of additions and refinements being formalized with the publication of ECMAScript 6 in 2015. The creation of Node.js in 2009 by Ryan Dahl sparked a significant increase in the usage of JavaScript outside of web browsers. Node combines the V8 engine, an event loop, and I/O APIs, thereby providing a stand-alone JavaScript runtime system. As of 2018, Node had been used by millions of developers, and npm had the most modules of any package manager in the world. The ECMAScript draft specification is currently maintained openly on GitHub, and editions are produced via regular annual snapshots. Potential revisions to the language are vetted through a comprehensive proposal process. Now, instead of edition numbers, developers check the status of upcoming features individually. The current JavaScript ecosystem has many libraries and frameworks, established programming practices, and substantial usage of JavaScript outside of web browsers. Plus, with the rise of single-page applications and other JavaScript-heavy websites, several transpilers have been created to aid the development process. "JavaScript" is a trademark of Oracle Corporation in the United States. The trademark was originally issued to Sun Microsystems on 6 May 1997, and was transferred to Oracle when they acquired Sun in 2010. JavaScript is the dominant client-side scripting language of the Web, with 98% of all websites using it for this purpose. Scripts are embedded in or included from HTML documents and interact with the DOM. All major web browsers have a built-in JavaScript engine that executes the code on the user's device.
|
JavaScript
|
https://en.wikipedia.org/wiki?curid=9845
| 28,074 |
Over 80% of websites use a third-party JavaScript library or web framework for their client-side scripting. jQuery is by far the most popular library, used by over 75% of websites. Facebook created the React library for its website and later released it as open source; other sites, including Twitter, now use it. Likewise, the Angular framework created by Google for its websites, including YouTube and Gmail, is now an open source project used by others. In contrast, the term "Vanilla JS" has been coined for websites not using any libraries or frameworks, instead relying entirely on standard JavaScript functionality. The use of JavaScript has expanded beyond its web browser roots. JavaScript engines are now embedded in a variety of other software systems, both for server-side website deployments and non-browser applications. Initial attempts at promoting server-side JavaScript usage were Netscape Enterprise Server and Microsoft's Internet Information Services, but they were small niches. Server-side usage eventually started to grow in the late 2000s, with the creation of Node.js and other approaches. Electron, Cordova, React Native, and other application frameworks have been used to create many applications with behavior implemented in JavaScript. Other non-browser applications include Adobe Acrobat support for scripting PDF documents and GNOME Shell extensions written in JavaScript. The following features are common to all conforming ECMAScript implementations unless explicitly specified otherwise. JavaScript supports much of the structured programming syntax from C (e.g., if statements, while loops, switch statements, do while loops, etc.). One partial exception is scoping: originally JavaScript only had function scoping with var; block scoping was added in ECMAScript 2015 with the keywords let and const. Like C, JavaScript makes a distinction between expressions and statements. One syntactic difference from C is automatic semicolon insertion, which allows semicolons (which terminate statements) to be omitted. JavaScript is weakly typed, which means certain types are implicitly cast depending on the operation used. Values are cast to numbers by casting to strings and then casting the strings to numbers. These processes can be modified by defining toString and valueOf functions on the prototype for string and number casting respectively. JavaScript has received criticism for the way it implements these conversions, as the complexity of the rules can be mistaken for inconsistency. For example, when adding a number to a string, the number will be cast to a string before performing concatenation, but when subtracting a number from a string, the string is cast to a number before performing subtraction.
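A two-line sketch of that asymmetry (illustrative):

"3" + 1 // "31": the number 1 is cast to a string, then concatenated
"3" - 1 // 2: the string "3" is cast to a number, then subtracted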
|
JavaScript
|
https://en.wikipedia.org/wiki?curid=9845
| 28,085 |
Often also mentioned is {} + [] resulting in 0 (number). This is misleading: the {} is interpreted as an empty code block instead of an empty object, and the empty array is cast to a number by the remaining unary + operator. If the expression is wrapped in parentheses, ({} + []), the curly brackets are interpreted as an empty object and the result of the expression is "[object Object]" as expected. In JavaScript, an object is an associative array, augmented with a prototype (see below); each key provides the name for an object property, and there are two syntactical ways to specify such a name: dot notation (obj.x) and bracket notation (obj['x']). A property may be added, rebound, or deleted at run-time. Most properties of an object (and any property that belongs to an object's prototype inheritance chain) can be enumerated using a for...in loop. JavaScript functions are first-class; a function is considered to be an object. As such, a function may have properties and methods, such as call() and bind(). A "nested" function is a function defined within another function. It is created each time the outer function is invoked. In addition, each nested function forms a lexical closure: the lexical scope of the outer function (including any constant, local variable, or argument value) becomes part of the internal state of each inner function object, even after execution of the outer function concludes. JavaScript also supports anonymous functions. Variables in JavaScript can be defined using the var, let or const keywords. Variables defined without keywords are defined at the global scope. There is no built-in input/output functionality in JavaScript; instead it is provided by the run-time environment. The ECMAScript specification in edition 5.1 mentions that "there are no provisions in this specification for input of external data or output of computed results". However, most runtime environments have a console object that can be used to print output. Here is a minimalist Hello World program in JavaScript, in a runtime environment with a console object: console.log("Hello, World!");
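A brief sketch of the scoping and property rules just described (illustrative; names such as scopes and obj are arbitrary):

function scopes() {
  if (true) {
    var a = 1;  // var is function-scoped: visible throughout scopes()
    let b = 2;  // let is block-scoped: visible only inside this block
  }
  console.log(a);    // 1
  // console.log(b); // ReferenceError: b is not defined
}
scopes();

const obj = {};
obj.x = 10;      // dot notation
obj['y'] = 20;   // bracket notation
for (const key in obj) {
  console.log(key); // enumerates "x" then "y"
}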
|
JavaScript
|
https://en.wikipedia.org/wiki?curid=9845
| 28,092 |
myElem.setAttribute('data-attr', 'baz'); // After setting this, the tag will look like this: `<span class="foo" id="bar" data-attr="baz"></span>`. This could also be written as `myElem.dataset.attr = 'baz'`. Elements can be imperatively grabbed with querySelector for one element, or with querySelectorAll for multiple elements that can be looped over with forEach; for example, document.querySelectorAll('.multiple') returns all elements with the "multiple" class. In JavaScript, function closures capture their non-local variables by reference (see the sketch at the end of this entry). Arrow functions were first introduced in the 6th Edition, ECMAScript 2015; they shorten the syntax for writing functions in JavaScript. Arrow functions are anonymous, so a variable is needed to refer to them in order to invoke them after their creation, unless surrounded by parentheses and executed immediately. An arrow function, like other function definitions, can be executed in the same statement as it is created, as in const five_multiples = generate_multiplier_function(5), where the supplied argument "seeds" the expression and is retained by the closure. Immediately-invoked function expressions are often used to create closures, which allow gathering properties and methods in a namespace and making some of them private. Generator objects (in the form of generator functions) provide a function which can be called, exited, and re-entered while maintaining internal context (statefulness). JavaScript and the DOM provide the potential for malicious authors to deliver scripts to run on a client computer via the Web. Browser authors minimize this risk using two restrictions. First, scripts run in a sandbox in which they can only perform Web-related actions, not general-purpose programming tasks like creating files. Second, scripts are constrained by the same-origin policy: scripts from one Web site do not have access to information such as usernames, passwords, or cookies sent to another site. Most JavaScript-related security bugs are breaches of either the same-origin policy or the sandbox. There are subsets of general JavaScript—ADsafe, Secure ECMAScript (SES)—that provide greater levels of security, especially for code created by third parties (such as advertisements). Closure Toolkit is another project for safe embedding and isolation of third-party JavaScript and HTML.
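A minimal sketch of the closure, arrow-function, and generator behavior described above (illustrative; generate_multiplier_function appears above without a body, so its definition here is an assumption):

// Closures capture non-local variables by reference:
function counter() {
  let n = 0;           // n is captured by the inner arrow function
  return () => ++n;    // each call updates the same n
}
const next = counter();
next(); // 1
next(); // 2

// Assumed definition for generate_multiplier_function (hypothetical):
const generate_multiplier_function = (a) => (b) => a * b; // a is retained in the closure
const five_multiples = generate_multiplier_function(5);
five_multiples(3); // 15

// A generator function can be exited and re-entered, keeping its internal state:
function* naturals() {
  let i = 1;
  while (true) yield i++; // execution pauses at each yield
}
const gen = naturals();
gen.next().value; // 1
gen.next().value; // 2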
|
JavaScript
|
https://en.wikipedia.org/wiki?curid=9845
| 28,104 |
Content Security Policy is the main intended method of ensuring that only trusted code is executed on a Web page. A common JavaScript-related security problem is cross-site scripting (XSS), a violation of the same-origin policy. XSS vulnerabilities occur when an attacker can cause a target Web site, such as an online banking website, to include a malicious script in the webpage presented to a victim. The script in this example can then access the banking application with the privileges of the victim, potentially disclosing secret information or transferring money without the victim's authorization. A solution to XSS vulnerabilities is to use "HTML escaping" whenever displaying untrusted data (see the sketch at the end of this entry). Some browsers include partial protection against "reflected" XSS attacks, in which the attacker provides a URL including malicious script. However, even users of those browsers are vulnerable to other XSS attacks, such as those where the malicious code is stored in a database. Only correct design of Web applications on the server side can fully prevent XSS. Another cross-site vulnerability is cross-site request forgery (CSRF). In CSRF, code on an attacker's site tricks the victim's browser into taking actions the user did not intend at a target site (like transferring money at a bank). When target sites rely solely on cookies for request authentication, requests originating from code on the attacker's site can carry the same valid login credentials as those of the initiating user. In general, the solution to CSRF is to require an authentication value in a hidden form field, and not only in the cookies, to authenticate any request that might have lasting effects. Checking the HTTP Referer header can also help. "JavaScript hijacking" is a type of CSRF attack in which a <script> tag on an attacker's site exploits a page on the victim's site that returns private information such as JSON or JavaScript; several mitigations are possible. Developers of client-server applications must recognize that untrusted clients may be under the control of attackers. The application author cannot assume that their JavaScript code will run as intended (or at all), because any secret embedded in the code could be extracted by a determined adversary; this has several implications for how such applications should be designed. Package management systems such as npm and Bower are popular with JavaScript developers. Such systems allow a developer to easily manage their program's dependencies upon other developers' program libraries. Developers trust that the maintainers of the libraries will keep them secure and up to date, but that is not always the case. A vulnerability has emerged because of this blind trust. Relied-upon libraries can have new releases that cause bugs or vulnerabilities to appear in all programs that rely upon them. Inversely, a library can go unpatched with known vulnerabilities out in the wild. In a study looking over a sample of 133,000 websites, researchers found that 37% of the websites included a library with at least one known vulnerability. "The median lag between the oldest library version used on each website and the newest available version of that library is 1,177 days in ALEXA, and development of some libraries still in active use ceased years ago." Another possibility is that the maintainer of a library may remove the library entirely. This occurred in March 2016 when Azer Koçulu removed his repository from npm, causing tens of thousands of programs and websites depending upon his libraries to break.
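A minimal sketch of HTML escaping (illustrative only; production code should rely on a vetted templating or sanitization library):

// Replace the characters that let untrusted text break out of an HTML context.
// The ampersand must be escaped first so it does not re-escape later entities.
function escapeHTML(untrusted) {
  return untrusted
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}
escapeHTML('<script>steal(document.cookie)</script>');
// "&lt;script&gt;steal(document.cookie)&lt;/script&gt;"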
|
JavaScript
|
https://en.wikipedia.org/wiki?curid=9845
| 28,111 |
JavaScript provides an interface to a wide range of browser capabilities, some of which may have flaws such as buffer overflows. These flaws can allow attackers to write scripts that would run any code they wish on the user's system. This code is not by any means limited to another JavaScript application. For example, a buffer overrun exploit can allow an attacker to gain access to the operating system's API with superuser privileges. Plugins, such as video players, Adobe Flash, and the wide range of ActiveX controls enabled by default in Microsoft Internet Explorer, may also have flaws exploitable via JavaScript (such flaws have been exploited in the past). In Windows Vista, Microsoft attempted to contain the risks of bugs such as buffer overflows by running the Internet Explorer process with limited privileges. Google Chrome similarly confines its page renderers to their own "sandbox". Web browsers are capable of running JavaScript outside the sandbox, with the privileges necessary to, for example, create or delete files. Such privileges are not intended to be granted to code from the Web. Incorrectly granting privileges to JavaScript from the Web has played a role in vulnerabilities in both Internet Explorer and Firefox. In Windows XP Service Pack 2, Microsoft demoted JScript's privileges in Internet Explorer. Microsoft Windows allows JavaScript source files on a computer's hard drive to be launched as general-purpose, non-sandboxed programs (see: Windows Script Host). This makes JavaScript (like VBScript) a theoretically viable vector for a Trojan horse, although JavaScript Trojan horses are uncommon in practice. In 2015, a JavaScript-based proof-of-concept implementation of a rowhammer attack was described in a paper by security researchers. In 2017, a JavaScript-based attack via browser was demonstrated that could bypass ASLR, called "ASLR⊕Cache" or AnC. In 2018, the paper that announced the Spectre attacks against speculative execution in Intel and other processors included a JavaScript implementation. A common misconception is that JavaScript is the same as Java. Both indeed have a C-like syntax (the C language being their most immediate common ancestor language). They are also typically sandboxed (when used inside a browser), and JavaScript was designed with Java's syntax and standard library in mind. In particular, all Java keywords were reserved in original JavaScript, JavaScript's standard library follows Java's naming conventions, and JavaScript's Math and Date objects are based on classes from Java 1.0.
|
JavaScript
|
https://en.wikipedia.org/wiki?curid=9845
| 28,121 |
Java and JavaScript both first appeared in 1995, but Java was developed by James Gosling of Sun Microsystems and JavaScript by Brendan Eich of Netscape Communications. The differences between the two languages are more prominent than their similarities. Java has static typing, while JavaScript's typing is dynamic. Java is loaded from compiled bytecode, while JavaScript is loaded as human-readable source code. Java's objects are class-based, while JavaScript's are prototype-based. Finally, Java did not support functional programming until Java 8, while JavaScript has done so from the beginning, being influenced by Scheme. JSON, or JavaScript Object Notation, is a general-purpose data interchange format that is defined as a subset of JavaScript's object literal syntax (see the sketch at the end of this entry). TypeScript (TS) is a strictly typed variant of JavaScript. TS differs by introducing type annotations to variables and functions, and by introducing a type language to describe the types within JS. Otherwise TS shares much the same feature set as JS, allowing it to be easily transpiled to JS for running client-side and to interoperate with other JS code. Since 2017, web browsers have supported WebAssembly, a binary format that enables a JavaScript engine to execute performance-critical portions of web page scripts close to native speed. WebAssembly code runs in the same sandbox as regular JavaScript code. JavaScript is the dominant client-side language of the Web, and many websites are script-heavy. Thus transpilers have been created to convert code written in other languages, which can aid the development process.
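A small illustration of JSON as object-literal syntax (illustrative; aside from rare edge cases, a JSON text reads like a JavaScript object literal):

const text = '{"name": "Ada", "born": 1815}';
const person = JSON.parse(text);      // string -> object
console.log(person.name);             // "Ada"
console.log(JSON.stringify(person));  // object -> '{"name":"Ada","born":1815}'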
|
Tesla, Inc.
|
https://en.wikipedia.org/wiki?curid=5533631
| 32,337 |
In October 2022, the company announced it would deliver its first all-electric semitrailer truck for PepsiCo in December, the first new model the company had delivered to customers since the beginning of 2020, when it started delivering the Model Y crossover. At the time of the announcement, the trucks were to support PepsiCo plants in Sacramento and Modesto, California. In April 2019, Musk announced Tesla's intention to launch an autonomous taxi service by the end of 2020 using more than 1 million Tesla vehicles. A year later, in April 2020, Musk stated Tesla would not make the end-of-2020 deadline but said, "we'll have the functionality necessary for full self-driving by the end of the year." Tesla Energy's generation products include solar panels (built by other companies for Tesla), the Tesla Solar Roof (a solar shingle system) and the Tesla Solar Inverter. Other products include the Powerwall (a home energy storage device) and the Powerpack and Megapack, which are large-scale energy storage systems. Tesla Energy also develops software to allow customers to monitor and control their systems. Destination chargers are installed free of charge by Tesla-certified contractors; the locations originally were required to provide the electricity at no cost to their customers. Locations with six or more destination chargers may now start charging for electricity. All destination chargers appear in the in-car navigation system. Panasonic is Tesla's supplier of cells in the United States, and cooperates with Tesla in producing cylindrical 2170 batteries at Giga Nevada. In January 2021, Panasonic had the capacity to produce 39 GWh per year of batteries at Giga Nevada. Tesla's battery cells used in Giga Shanghai are supplied by Panasonic and Contemporary Amperex Technology (CATL), and are the more traditional prismatic (rectangular) cells used by other automakers. In 2018, a class action was filed against Musk and the members of Tesla's board alleging they breached their fiduciary duties by approving Musk's stock-based compensation plan. Musk received the first portion of his stock-options payout, worth more than $700 million, in May 2020. The quarter ending June 2021 was the first time Tesla made a profit independent of Bitcoin and regulatory credits.
|
Microsoft
|
https://en.wikipedia.org/wiki?curid=19001
| 34,565 |
Microsoft Corporation is an American multinational technology corporation headquartered at the Microsoft Redmond campus in Redmond, Washington, United States, producing computer software, consumer electronics, personal computers, and related services. Its best-known software products are the Windows line of operating systems, the Microsoft Office suite, and the Internet Explorer and Edge web browsers. Its flagship hardware products are the Xbox video game consoles and the Microsoft Surface lineup of touchscreen personal computers. Microsoft ranked No. 21 in the 2020 Fortune 500 rankings of the largest United States corporations by total revenue; it was the world's largest software maker by revenue as of 2019. It is one of the Big Five American information technology companies, alongside Alphabet, Amazon, Apple, and Meta. Microsoft was founded by Bill Gates and Paul Allen on April 4, 1975, to develop and sell BASIC interpreters for the Altair 8800. It rose to dominate the personal computer operating system market with MS-DOS in the mid-1980s, followed by Windows. The company's 1986 initial public offering (IPO), and subsequent rise in its share price, created three billionaires and an estimated 12,000 millionaires among Microsoft employees. Since the 1990s, it has increasingly diversified from the operating system market and has made a number of corporate acquisitions, the largest being the acquisition of LinkedIn for $26.2 billion in December 2016, followed by the acquisition of Skype Technologies for $8.5 billion in May 2011. Microsoft is market-dominant in the IBM PC compatible operating system market and the office software suite market, although it has lost the majority of the overall operating system market to Android. The company also produces a wide range of other consumer and enterprise software for desktops, laptops, tablets, gadgets, and servers, including Internet search (with Bing), the digital services market (through MSN), mixed reality (HoloLens), cloud computing (Azure), and software development (Visual Studio). Steve Ballmer replaced Gates as CEO in 2000, and later envisioned a "devices and services" strategy. This unfolded with Microsoft acquiring Danger Inc. in 2008, entering the personal computer production market for the first time in June 2012 with the launch of the Microsoft Surface line of tablet computers, and later forming Microsoft Mobile through the acquisition of Nokia's devices and services division. Since Satya Nadella took over as CEO in 2014, the company has scaled back on hardware and has instead focused on cloud computing, a move that helped the company's shares reach their highest value since December 1999.
|
Microsoft
|
https://en.wikipedia.org/wiki?curid=19001
| 34,569 |
Earlier dethroned by Apple in 2010, Microsoft reclaimed its position in 2018 as the most valuable publicly traded company in the world. In April 2019, Microsoft reached a market capitalization of $1 trillion, becoming the third U.S. public company to be valued at over $1 trillion, after Apple and Amazon. Microsoft has the fourth-highest global brand valuation. Microsoft has been criticized for its monopolistic practices, and the company's software has been criticized for problems with ease of use, robustness, and security. Childhood friends Bill Gates and Paul Allen sought to make a business using their skills in computer programming. In 1972, they founded Traf-O-Data, which sold a rudimentary computer to track and analyze automobile traffic data. Gates enrolled at Harvard University while Allen pursued a degree in computer science at Washington State University, though he later dropped out to work at Honeywell. The January 1975 issue of "Popular Electronics" featured Micro Instrumentation and Telemetry Systems's (MITS) Altair 8800 microcomputer, which inspired Allen to suggest that they could program a BASIC interpreter for the device. Gates called MITS and claimed that he had a working interpreter, and MITS requested a demonstration. Allen worked on a simulator for the Altair while Gates developed the interpreter, and it worked flawlessly when they demonstrated it to MITS in March 1975 in Albuquerque, New Mexico. MITS agreed to distribute it, marketing it as Altair BASIC. Gates and Allen established Microsoft on April 4, 1975, with Gates as CEO; Allen suggested the name "Micro-Soft", short for micro-computer software. In August 1977, the company formed an agreement with ASCII Magazine in Japan, resulting in its first international office, ASCII Microsoft. Microsoft moved its headquarters to Bellevue, Washington, in January 1979. Microsoft entered the operating system (OS) business in 1980 with its own version of Unix called Xenix, but it was MS-DOS that solidified the company's dominance. IBM awarded a contract to Microsoft in November 1980 to provide a version of the CP/M OS to be used in the IBM Personal Computer (IBM PC). For this deal, Microsoft purchased a CP/M clone called 86-DOS from Seattle Computer Products, which it branded as MS-DOS, although IBM rebranded it to IBM PC DOS. Microsoft retained ownership of MS-DOS following the release of the IBM PC in August 1981. IBM had copyrighted the IBM PC BIOS, so other companies had to reverse engineer it in order for non-IBM hardware to run as IBM PC compatibles, but no such restriction applied to the operating systems. Microsoft eventually became the leading PC operating systems vendor. The company expanded into new markets with the release of the Microsoft Mouse in 1983, as well as with a publishing division named Microsoft Press.
|
Microsoft
|
https://en.wikipedia.org/wiki?curid=19001
| 34,573 |
Paul Allen resigned from Microsoft in 1983 after developing Hodgkin's lymphoma. Allen claimed in "Idea Man: A Memoir by the Co-founder of Microsoft" that Gates wanted to dilute his share in the company when he was diagnosed with Hodgkin's disease because he did not think that he was working hard enough. Allen later invested in low-tech sectors, sports teams, commercial real estate, neuroscience, private space flight, and more. Microsoft released Windows on November 20, 1985, as a graphical extension for MS-DOS, despite having begun jointly developing OS/2 with IBM the previous August. Microsoft moved its headquarters from Bellevue to Redmond, Washington, on February 26, 1986, and went public on March 13, with the resulting rise in stock making an estimated four billionaires and 12,000 millionaires from Microsoft employees. Microsoft released its version of OS/2 to original equipment manufacturers (OEMs) on April 2, 1987. In 1990, the Federal Trade Commission examined Microsoft for possible collusion due to the partnership with IBM, marking the beginning of more than a decade of legal clashes with the government. Meanwhile, the company was at work on Microsoft Windows NT, which was heavily based on their copy of the OS/2 code. It shipped on July 21, 1993, with a new modular kernel and the 32-bit Win32 application programming interface (API), making it easier to port from 16-bit (MS-DOS-based) Windows. Microsoft informed IBM of Windows NT, and the OS/2 partnership deteriorated. In 1990, Microsoft introduced the Microsoft Office suite, which bundled separate applications such as Microsoft Word and Microsoft Excel. On May 22, 1990, Microsoft launched Windows 3.0, featuring streamlined user interface graphics and improved protected mode capability for the Intel 386 processor, and both Office and Windows became dominant in their respective areas. On July 27, 1994, the Department of Justice's Antitrust Division filed a competitive impact statement that said: "Beginning in 1988 and continuing until July 15, 1994, Microsoft induced many OEMs to execute anti-competitive 'per processor' licenses. Under a per-processor license, an OEM pays Microsoft a royalty for each computer it sells containing a particular microprocessor, whether the OEM sells the computer with a Microsoft operating system or a non-Microsoft operating system. In effect, the royalty payment to Microsoft when no Microsoft product is being used acts as a penalty, or tax, on the OEM's use of a competing PC operating system. Since 1988, Microsoft's use of per processor licenses has increased."
|
Microsoft
|
https://en.wikipedia.org/wiki?curid=19001
| 34,577 |
Following Bill Gates' internal "Internet Tidal Wave" memo on May 26, 1995, Microsoft began to redefine its offerings and expand its product line into computer networking and the World Wide Web. With a few exceptions among new companies, like Netscape, Microsoft was the only major and established company that acted fast enough to be a part of the World Wide Web practically from the start. Other companies, like Borland, WordPerfect, Novell, IBM and Lotus, were much slower to adapt to the new situation, which gave Microsoft market dominance. The company released Windows 95 on August 24, 1995, featuring pre-emptive multitasking, a completely new user interface with a novel start button, and 32-bit compatibility; similar to NT, it provided the Win32 API. Windows 95 came bundled with the online service MSN, which was at first intended to be a competitor to the Internet, and (for OEMs) Internet Explorer, a Web browser. Internet Explorer was not bundled with the retail Windows 95 boxes, because the boxes were printed before the team finished the Web browser; it was instead included in the Windows 95 Plus! pack. Backed by a high-profile marketing campaign and what "The New York Times" called "the splashiest, most frenzied, most expensive introduction of a computer product in the industry's history," Windows 95 quickly became a success. Branching out into new markets in 1996, Microsoft and General Electric's NBC unit created a new 24/7 cable news channel, MSNBC. Microsoft created Windows CE 1.0, a new OS designed for devices with low memory and other constraints, such as personal digital assistants. In October 1997, the Justice Department filed a motion in the Federal District Court, stating that Microsoft had violated an agreement signed in 1994, and asked the court to stop the bundling of Internet Explorer with Windows. On January 13, 2000, Bill Gates handed over the CEO position to Steve Ballmer, an old college friend of Gates and an employee of the company since 1980, while creating a new position for himself as Chief Software Architect. Various companies including Microsoft formed the Trusted Computing Platform Alliance in October 1999 to (among other things) increase security and protect intellectual property through identifying changes in hardware and software. Critics decried the alliance as a way to enforce indiscriminate restrictions over how consumers use software and over how computers behave, and as a form of digital rights management: for example, the scenario where a computer is not only secured for its owner but also secured against its owner. On April 3, 2000, a judgment was handed down in the case of "United States v. Microsoft Corp.", calling the company an "abusive monopoly." Microsoft later settled with the U.S. Department of Justice in 2004. On October 25, 2001, Microsoft released Windows XP, unifying the mainstream and NT lines of OS under the NT codebase. The company released the Xbox later that year, entering the video game console market dominated by Sony and Nintendo. In March 2004, the European Union brought antitrust legal action against the company, citing that it had abused its dominance with the Windows OS, resulting in a judgment of €497 million ($613 million) and a requirement that Microsoft produce new versions of Windows XP without Windows Media Player: Windows XP Home Edition N and Windows XP Professional N. In November 2005, the company's second video game console, the Xbox 360, was released.
There were two versions, a basic version for $299.99 and a deluxe version for $399.99.
|