Tunable friction

Most machines, from wagon wheels to high-speed engines, quickly grind to a halt without lubrication. The culprit is friction, a highly complex phenomenon that heats the surfaces in contact and damages them. Traditional approaches all rely on lubrication to reduce and control the amount of friction; they are passive and never entirely satisfactory. Now, however, Carlos Drummond of the CNRS Centre de Recherche Paul Pascal in France reports in Physical Review Letters that there are more advanced methods which, by applying electric fields, make it possible to tune frictional forces actively.

Drummond began his work by noting that electric fields alter the structure of electrically charged molecules. Using molecules that adsorb spontaneously onto surfaces, he deposited a layer of charged polymers, known as polyelectrolytes, onto smooth mica sheets. These molecules form "polymer brushes" on the surface: they stick to it and stand up from it like the bristles of a toothbrush. When two mica surfaces prepared in this way are brought together, the friction between them depends on how these brushes interpenetrate. By attaching electrodes to the backs of the two mica sheets, Drummond applied an electric field and showed that the field changes the structure of the polymers in such a way that the friction between the two surfaces drops dramatically. Because such systems respond very quickly to an electric field, and because generating electrical signals is straightforward, Drummond anticipates that the observed effect will find use in many applications, from reducing wear in artificial joints (prostheses) to real-time control of the sense of touch: a computer interface for visually impaired users could be arranged so that the friction felt on touching a surface conveys data.

Planck's law violated at the nanoscale

In a new experiment, a silica fibre just 500 nm across has been shown not to obey Planck's law of radiation. Instead, say the Austrian physicists who carried out the work, the fibre heats and cools according to a more general theory that considers thermal radiation as a fundamentally bulk phenomenon. The work might lead to more efficient incandescent lamps and could improve our understanding of the Earth's changing climate, argue the researchers.

A cornerstone of thermodynamics, Planck's law describes how the energy density at different wavelengths of the electromagnetic radiation emitted by a "black body" varies according to the temperature of the body. It was formulated by German physicist Max Planck at the beginning of the 20th century using the concept of energy quantization that was to go on and serve as the basis for quantum mechanics. While a black body is an idealized, perfectly emitting and absorbing object, the law does provide very accurate predictions for the radiation spectra of real objects once those objects' surface properties, such as colour and roughness, are taken into account.
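For reference, one standard form of Planck's law gives the spectral energy density of black-body radiation as

u(\nu, T) = \frac{8\pi h\nu^{3}}{c^{3}} \, \frac{1}{e^{h\nu/k_{B}T} - 1},

where \nu is the frequency, T the temperature of the body, h Planck's constant, k_{B} Boltzmann's constant and c the speed of light.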

However, physicists have known for many decades that the law does not apply to objects with dimensions that are smaller than the wavelength of thermal radiation. Planck assumed that all radiation striking a black body will be absorbed at the surface of that body, which implies that the surface is also a perfect emitter. But if the object is not thick enough, the incoming radiation can leak out from the far side of the object instead of being absorbed, which in turn lowers its emission.
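To put the relevant length scale in context (a standard estimate, not a figure quoted by the researchers): Wien's displacement law puts the peak of the thermal spectrum at

\lambda_{\mathrm{max}} = \frac{b}{T}, \qquad b \approx 2.9\times10^{-3}\ \mathrm{m\,K},

which for temperatures of a few hundred kelvin corresponds to several micrometres – roughly an order of magnitude larger than the 500 nm fibre in this experiment, exactly the regime in which the surface-absorption assumption fails.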

Spectral anomalies spotted before

Other research groups had previously shown that miniature objects do not behave as Planck predicted. For example, in 2009 Chris Regan and colleagues at the University of California, Los Angeles reported that they had found anomalies in the spectrum of radiation emitted by a carbon nanotube just 100 atoms wide.

In this latest work, Christian Wuttke and Arno Rauschenbeutel of the Vienna University of Technology have gone one better by showing experimentally that the emission from a tiny object matches the predictions of an alternative theory.

To produce the 500-nm thick fibre they used in their experiment, Wuttke and Rauschenbeutel heated and pulled a standard optical fibre. They then heated the ultra-thin section, which was a few millimetres long, by shining a laser beam through it and used another laser to measure the rate of heating and subsequent cooling. Bounced between two mirrors integrated into the fibre a fixed distance apart, this second laser beam cycled into and out of resonance as the changing temperature varied the fibre's refractive index and hence the wavelength of radiation passing through it.

Fluctuational electrodynamics

By measuring the time between resonances, the researchers found the fibre to be heating and cooling much more slowly than predicted by the Stefan–Boltzmann law. This law is a consequence of Planck's law and defines how the total power radiated by an object is related to its temperature. Instead, they found the observed rate to be a very close match to that predicted by a theory known as fluctuational electrodynamics, which takes into account not only a body's surface properties, but also its size and shape plus its characteristic absorption length. "We are the first to measure total radiated power and show quantitatively that it agrees with model predictions," says Wuttke.
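Concretely, integrating Planck's law over all wavelengths gives the Stefan–Boltzmann law for an ideal black body,

P = \sigma A T^{4}, \qquad \sigma \approx 5.67\times10^{-8}\ \mathrm{W\,m^{-2}\,K^{-4}},

where P is the total radiated power, A the surface area and T the absolute temperature; for real surfaces the right-hand side is scaled by an emissivity between 0 and 1.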

According to Wuttke, the latest work could have practical applications. For example, he says that it might lead to an increase in the efficiency of traditional incandescent light bulbs. Such devices generate light because they are heated to the point where the peak of their emission spectrum lies close to visible wavelengths, but they waste a lot of energy because much of their power is still emitted at infrared wavelengths. Comparing a 500nm-thick light-bulb filament with a very short antenna, Wuttke explains that it would not be thick enough to efficiently generate infrared radiation, which has wavelengths above about 700 nm, therefore suppressing emission at these wavelengths and enhancing emission at shorter visible wavelengths. He points out, however, that glass fibre, while ideal for the laboratory, would be a poor candidate for everyday use, since it is an insulator and is transparent to visible light. "A lot of research would be needed to find a material that conducts electricity and is easily heated, while capable of being made small enough and in large quantities," he says.

Atmospheric applications

The research might also improve understanding of how small particles in the atmosphere, such as those produced by soil erosion, combustion or volcanic eruptions, contribute to climate change. Such particles might cool the Earth, by reflecting incoming solar radiation, or warm the Earth, by absorbing the thermal radiation from our planet, as greenhouse gases do. "The beauty of fluctuational electrodynamics", says Wuttke, "is that just by knowing the shape and absorption characteristics of the material you can work out from first principles how efficiently and at which wavelengths it is absorbing and emitting thermal radiation." But, he adds, here too more work would be needed to apply the research to real atmospheric conditions.

One thing that Wuttke and Rauschenbeutel are sure of, however, is that their research does not undermine quantum mechanics. Planck's theory, explains Rauschenbeutel, is limited by the assumption that absorption and emission are purely surface phenomena and by the omission of wave phenomena. His principle of the quantization of energy, on the other hand, is still valid. "The theory we have tested uses quantum statistics," he says, "so it is not in contradiction with quantum mechanics. Quite the opposite, in fact."

Regan describes the latest work as "very elegant", predicting that it will "illuminate new features of radiative thermal transport and Planck's law at the nanoscale". He suggests, however, that using an emissivity model that incorporates the transparency of the thin optical fibres would allow Planck's law to more accurately describe the radiation from these tiny emitters.

A Casimir force for life

The Casimir effect is perhaps best known as a quantum phenomenon, in which vacuum fluctuations can give rise to an attractive force between two parallel mirrors. But there is also a thermodynamic equivalent, caused by fluctuations in the composition of a fluid close to its critical point. New research by physicists in the US suggests that these "critical Casimir" forces act on the proteins inside cellular membranes, allowing proteins to communicate with one another and stimulating cells' responses to allergens such as pollen.
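For comparison, the textbook quantum Casimir pressure between two ideal parallel mirrors a distance d apart in vacuum is

\frac{F}{A} = -\frac{\pi^{2}\hbar c}{240\,d^{4}},

an attraction that grows steeply as the mirrors approach. The "critical Casimir" force discussed below is its thermodynamic analogue, driven by composition fluctuations in a near-critical fluid rather than by vacuum fluctuations.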

All cells are surrounded by a membrane that controls the flow of substances into and out of the organism. Membranes are made up of molecules called lipids within which proteins are embedded. They were once thought to be essentially uniform, but a number of experiments starting in the 1970s and 1980s indicated that the lipids in fact cluster to form distinct structures tens or hundreds of times larger than the lipid molecules themselves. Scientists did not understand, however, where the energy needed to maintain such structures came from.

In 2008 biophysicist Sarah Veatch at Cornell University in upstate New York and colleagues found a solution. It was known that above 25 °C membranes isolated from live mammal cells exist in a single liquid phase, whereas below that temperature they separate out into two distinct phases, composed of different kinds of lipids and proteins – a bit like oil and water refusing to mix when brought together. What Veatch's group discovered was that as they lowered the temperature of the membranes close to that at which the phases separate out, known as the critical point, small fluctuating patches of the second phase started to appear. Such fluctuations – which measured several microns across and were visible in an optical microscope – do not require large amounts of energy to form.

Critical look at criticality

Veatch has since moved to the University of Michigan but for the current research teamed up with two physicists back at Cornell, Benjamin Machta and James Sethna, to understand the purpose of this criticality. The researchers reckoned that certain kinds of proteins are attracted to one of the phases while other kinds are attracted to the second phase, so tending to draw like proteins together and separate out unlike proteins. As Veatch explains, these interacting proteins would form "signalling cascades" to transmit information regarding the identity of compounds in a cell's vicinity from receptor proteins in the membrane to the inside of the cell. Such information could be used, for example, to decide whether it is a good time to divide or whether it is safe to crawl towards food. "We think that one reason cell membranes contain critical fluctuations is to help facilitate some of the early steps in these signalling pathways," she says.

To calculate the strength and form of the Casimir forces between proteins, Machta used mathematics developed originally for string theory. He found that, as expected, the forces are attractive for like proteins and repulsive for unlike ones, and that they yield a potential energy several times that of the proteins' thermal energy, over distances of tens of nanometres. Much stronger electrostatic interactions, he explains, are limited to ranges of about a nanometre by the screening effects of ions inside the cell. "We have found that by tuning close to criticality, cells have arranged for a long-ranged force to act between proteins," he says.
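The roughly one-nanometre electrostatic range quoted here corresponds to the Debye screening length. As a standard estimate (not a number taken from the paper), for a monovalent salt in water at room temperature

\lambda_{D} \approx \frac{0.304\ \mathrm{nm}}{\sqrt{I/(1\ \mathrm{mol\,l^{-1}})}},

which gives about 0.8 nm at a physiological ionic strength of roughly 150 mM – far shorter than the tens of nanometres over which the critical Casimir forces are calculated to act.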

Sethna adds a broader perspective. "It is amazing how many reactions in cells all involve energies of the same size as thermal fluctuations," he says. "We think that it is the cell being economical – why pay more?"

Something to sneeze at

The researchers suspect that the existence of these critical Casimir forces explains why cells low on cholesterol do not function as they should – the removal of the cholesterol, they reckon, taking the membrane away from its critical point. They also speculate that the forces are involved in the sneezing process. Sethna explains that when the receptor proteins in immune cells detect an allergen such as pollen they cluster together, and this clustering somehow triggers the histamines that cause sneezing. He says that perhaps an allergen simply changes the preference of the receptor proteins for one of the two liquid phases in the membrane, hence drawing them together.

The team is hopeful that its work could lead to medical applications. Veatch explains that defects in lipids are thought to contribute to a large number of diseases, including cancer, auto-immunity diseases, and inflammation. "This work may shed light on how lipids could impact some aspects of these diseases," she says. "In the future, I can imagine drugs that specifically target lipids to regulate interactions between proteins in order to treat human disease."

Sethna adds, however, that the time scale for such applications is likely to be long. "Our work is more like figuring out how to make better concrete to build the subbasement of the skyscraper that eventually would house the penthouse of health applications," he says.

Theory explains behaviour

But in addition to any future applications, Sethna argues that the existence of criticality within cells lessens the reliance on purely evolutionary mechanisms when trying to understand how cells operate. "There are lots of things about cells that biologists assume happen because 'evolution made it so'," he says. "Here, I guess, evolution allowed the cell to find this critical point. But once the cell is at the critical point, we can use systematic, cool theory to explain lots of the behaviour, without repeatedly accounting for everything using evolution."

However, some independent experts feel that Veatch's experimental results must be treated with caution because they were not obtained using intact cells. One, who asked to remain anonymous, argues that the separation of the membrane from the rest of the cell might have removed certain relevant components from the membrane and that the body of the cell itself might influence the critical fluctuations in some way. "I am not yet convinced that the theory presented is applicable to in vivo biological membranes," he says. "I therefore think that much more experimental work has to be done to investigate this phenomenon."

The research is described in Physical Review Letters.

Modified Newtonian gravity succeeds in explaining elliptical galaxies

A theory proposed as an alternative to dark matter has successfully predicted the rotational properties of two elliptical galaxies. The work was carried out, using the modified Newtonian dynamics (MOND) theory, by Mordehai Milgrom, who himself first proposed the theory 30 years ago. By showing that MOND can be used to explain the properties of complicated elliptical galaxies – in addition to the much simpler spiral galaxies – Milgrom argues that MOND is a valuable alternative to dark matter, because it accounts for the strange properties of galaxies.

Dark matter was proposed in 1933 to explain why the galaxies in certain clusters move faster than we would expect from their visible "baryonic" matter. A few decades later, similar behaviour was seen in individual galaxies: the rotational velocity of the outermost stars did not "drop off" as a function of distance, but remained flat. These observations directly contradicted Newtonian gravity, which should remain valid in extragalactic regions just as it does on Earth or in the solar system. But if we assume that a "halo" of invisible matter exists in (or around) galactic structures, Newton's inverse-square law is restored.

To explain these galactic irregularities, physicists initially tried to make direct measurements of dark matter in order to find out exactly what it is, with very little success. As a result, some researchers do not believe that dark matter exists and have proposed alternative explanations for the strange behaviour of galaxies.

Spectacular success

Now a new analysis shows that one of the alternative theories, called MOND, describes the properties of two elliptical galaxies as well as dark matter does. MOND was originally formulated to describe spiral galaxies and has had enormous success in predicting some of the properties of these structures. Extending it to explain elliptical galaxies could shift the argument in favour of this theory, because elliptical galaxies are predicted to have formed through different processes from spiral galaxies and their properties are far more difficult to calculate.

MOND was first proposed in 1983 by Mordehai Milgrom, an astrophysicist at the Weizmann Institute. The theory's basic premise is that at very small accelerations, below about 10⁻¹⁰ m s⁻², Newton's second law no longer holds. Instead, Newton's formula is modified so that under certain conditions the gravitational force between two bodies falls off more gently than the inverse-square law.

Predictably, a theory that changes Newton's law will face plenty of criticism, and MOND was no exception. All the same, the theory has undeniable attractions, such as making testable predictions and not relying on unseen dark matter. In 2004, Jacob Bekenstein presented a version of MOND that is consistent with Einstein's general theory of relativity, which drew more attention from the physics community.

No need for dark matter?!

In his new work, Milgrom analyses the hydrostatics of a spherical envelope of hot, X-ray-emitting gas in two elliptical galaxies and shows that the predictions of MOND hold there. This matters, Milgrom says, because elliptical galaxies are thought to have evolved in a completely different way from spiral and other disc galaxies: they are believed to form through the collision and merger of two galaxies. MOND's success means that its predictive accuracy cannot be a coincidence, and that in turn points to a deeper underlying truth.

The fact that a single mathematical law can predict the rotation speeds of two types of galaxy formed in two different ways, he argues, undermines the dark-matter hypothesis. He says: "In the dark-matter picture, the galaxies we see today are the end result of very complicated and haphazard formation processes. You start with small galaxies; they merge, they collide, and explosions occur within the galaxies. During this stormy evolution, dark matter and ordinary matter are subjected to these processes in very different ways, so you really do not expect to see any real correlation between the dark matter and the ordinary matter. This is a serious weak point of the dark-matter picture."

Dan Hooper, an astrophysicist and dark-matter expert at Fermilab, says that MOND will not win over its critics by demonstrating its capabilities in galaxies, even if they are galaxies that have not been tested before. He adds: "I have found that MOND does a very good job of describing the dynamics of galaxies, and this paper is another example of MOND succeeding on the galactic scale. Where MOND falls short is on larger scales, such as clusters of galaxies or even larger cosmological scales." He cites the anisotropy of the cosmic microwave background as one example.

Link to the article on the blog

The 2012 Nobel Prize in Physics goes to quantum information

The 2012 Nobel Prize in Physics has been awarded to Serge Haroche and David Wineland for their work on controlling quantum systems. According to the Royal Swedish Academy of Sciences, Haroche and Wineland receive the prize for "novel experimental methods" that "make it possible to measure and manipulate individual quantum systems". Haroche is a French citizen and works at the Collège de France in Paris; Wineland is a US citizen and works at the National Institute of Standards and Technology in Boulder, Colorado. In its announcement, the Royal Swedish Academy of Sciences stated: "Serge Haroche and David Wineland have independently developed methods for measuring and manipulating individual particles; methods that preserve their quantum nature."

CQED pioneer

Haroche was born in Casablanca, Morocco, in 1944 and received his PhD from Pierre and Marie Curie University in 1971. He was awarded half of the prize for his role in a new field known as cavity quantum electrodynamics (CQED), in which the properties of an atom are controlled by placing it inside an optical or microwave cavity. Haroche concentrated on microwave experiments and turned the technique into a method for controlling the properties of single photons.

In a series of landmark experiments, Haroche used CQED to study Schrödinger's famous cat scenario, in which a system is placed in a superposition of two very different quantum states and the superposition is preserved as long as no measurement is made on the system. Such states are extremely fragile, and the techniques developed in CQED for measuring quantum states have applications in the development of quantum computers.

Master of ion control

David Wineland was born in Milwaukee, Wisconsin, in 1944 and received his PhD from Harvard University in 1970. In addition to leading a group at the National Institute of Standards and Technology (NIST), he works with the University of Colorado Boulder.

He received the Nobel prize for his revolutionary work on the quantum control of ions. Among his achievements is the creation and transfer of a single ion in a Schrödinger-cat state, using the trapping techniques developed at NIST. Ion traps are built with precisely controlled electric fields in ultra-high vacuum, and they can confine one or several ions together.

Trapping ions

Once trapped, the ions oscillate. To cool them to their lowest energy state (the ground state), this oscillational energy must be removed. Wineland developed a laser-based technique for this that removes quanta of vibrational energy from the ion. This "sideband" technique can also place the ions in a superposition of states (a Schrödinger cat). Wineland used these ion-control techniques to build ultra-precise optical clocks and circuits for quantum computers.

Quantum information

Rainer Blatt of the University of Innsbruck in Austria, who has performed experiments in both CQED and ion trapping, endorses the Nobel committee's choices. Blatt points out that the two laureates developed similar techniques that find different applications in physical systems – techniques that underpin many of the emerging quantum-information systems.

Blatt recalls Wineland's 2008 work on "quantum-logic spectroscopy", in which a single ion is used as an optical clock, as well as the construction in 2009 of a small device that provides all the functions needed for large-scale ion-based quantum processing.

Haroche's work provides a framework for controlling the interaction between a single atom and a single photon – something that, Blatt says, is now used to exchange quantum information between atoms and photons. This enables physicists to build quantum computers in which data are stored in stationary quantum bits (qubits) that remain stable for relatively long times, and in which data can be transferred between atoms via photons in such a way that their quantum state is preserved while travelling over relatively long distances.

Dark-matter alternative tackles elliptical galaxies

An alternative theory to dark matter has successfully predicted the rotational properties of two elliptical galaxies. The work was done in Israel by Mordehai Milgrom using the modified Newtonian dynamics (MOND) theory that he first developed nearly 30 years ago. By showing that MOND can be used to explain the properties of complicated elliptical galaxies – as well as the much simpler spiral galaxies – Milgrom argues that MOND offers a viable alternative to dark matter when it comes to explaining the bizarre properties of galaxies.

Dark matter was proposed in 1933 to explain why galaxies in certain clusters move faster than would be possible if they contained only the "baryonic" matter that we can see. A few decades later, similar behaviour was detected in individual galaxies, whereby the rotational velocity of the outermost stars was found not to "drop off" as a function of distance but instead remain flat. These observations directly contradicted Newtonian gravity, which should hold true in extragalactic regions just as it does on Earth and in the solar system. But by assuming there are "haloes" of invisible matter in and around galactic structures, Newton's familiar inverse square law is restored.

Since it was first invoked to explain these galactic irregularities, physicists have tried to make direct measurements on dark matter to try to work out exactly what it is – with very little success. As a result, there are some researchers who do not believe that dark matter exists and have proposed alternative explanations for the strange behaviour of galaxies.

Spectacular success

Now a new analysis suggests that one alternative theory called MOND describes the properties of two elliptical galaxies just as well as dark matter. MOND was originally formulated to describe spiral galaxies and has had spectacular success in predicting certain properties of these structures. Its extension to cover elliptical galaxies could strengthen the arguments in favour of this alternative theory. This is because elliptical galaxies are predicted to have formed by a different process from spiral galaxies and their properties are much more difficult to calculate.

MOND was first proposed in 1983 by the astrophysicist Mordehai Milgrom of the Weizmann Institute in Israel. The basic premise of the theory is that at extremely small accelerations of less than 10⁻¹⁰ m s⁻² Newton's second law does not hold. Instead, Milgrom modified Newton's formula so that under certain circumstances the gravitational force between two bodies decays more gently than the inverse square of the distance between them.
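For reference, the textbook statement of Milgrom's proposal (not an equation quoted in this article) is that in the deep-MOND regime the effective acceleration obeys

a \simeq \sqrt{g_{N}\,a_{0}} \qquad \text{for } a \ll a_{0} \approx 10^{-10}\ \mathrm{m\,s^{-2}},

where g_{N} = GM/r^{2} is the usual Newtonian value. Setting a = v^{2}/r for a circular orbit then gives v^{4} = G M a_{0}, a rotation velocity independent of radius – the flat rotation curves described above.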

Predictably, a theory that advocates changing Newton's laws is destined to meet with widespread scepticism, and MOND is no exception. Nevertheless, it also has undeniable attractions, such as the ease with which it makes testable predictions and the fact that it does not rely on an as-yet unseen dark matter. And, since a version of MOND consistent with Einstein's general theory of relativity was derived in 2004 by Jacob Bekenstein of the Hebrew University of Jerusalem, the wider physics community has begun to take notice.

No coincidence

In the new research, Milgrom analyses the hydrostatics of a spherical envelope of hot, X-ray emitting gas in two elliptical galaxies and shows the predictions of MOND are equally valid in these. This is important, Milgrom argues, because elliptical galaxies are thought to have evolved in a completely different way from spiral galaxies and other disc galaxies – they are thought to be formed by the collision and merging of two other galaxies. MOND's success, he argues, means that its predictive accuracy cannot simply be a coincidence and that it must hint at a deeper underlying truth.

He also suggests that the fact that the same mathematical law can be used to predict the rotation speeds of two different types of galaxies formed in two different ways significantly undermines the dark-matter hypothesis. "In the dark-matter picture," he says, "the galaxies we see today are the end result of very complicated and very haphazard formation processes. You start with small galaxies – they merge, they collide – there are explosions in the galaxies and so on and so forth. During this stormy evolution the dark matter and the normal matter are subject to these processes in very different ways and so you really do not expect to see any real correlations between the dark matter and the normal matter. This is a very weak point of the dark-matter picture."

Particle astrophysicist and dark-matter expert Dan Hooper of Fermilab in the US argues that MOND will not win over sceptics by showing its applicability to galaxies, even if those galaxies are of types that have not been previously tested. "I have found it to be the case for quite some time now that MOND does a very good job of explaining the dynamics of galaxies," he says. "And this paper is yet another example of where MOND succeeds at the galactic scale. Where MOND fails is on larger scales such as in clusters of galaxies and on even larger cosmological scales." He cites the anisotropy of the cosmic-microwave background as one example of this.

 

Link to the original article

Dark-matter hope fades in microwave haze

The latest results from the Planck space telescope have confirmed the presence of a microwave haze at the centre of the Milky Way. However, the haze appears to be more elongated than originally thought, which casts doubt over previous claims that annihilating dark matter is the cause of the emissions. A roughly spherical haze of radiation at the heart of our galaxy was identified as far back as 2004 by the Wilkinson Microwave Anisotropy Probe (WMAP). Since then, some astrophysicists have suggested that this haze is produced by annihilating dark-matter particles.

However, some researchers have questioned whether the haze actually exists at all, suggesting that it could be an artefact of how the WMAP data were analysed. Doubts were raised as to whether WMAP was capable of picking out this weak signal buried deep in emissions from galactic dust, the cosmic microwave background (CMB) and other noise from hectic regions of the galaxy.

It is definitely there

The argument now seems to have been settled by the latest results from Planck, a European Space Agency mission launched in May 2009. "Crudely speaking, we agree with all the WMAP results," explains Krzysztof Gorski of NASA's Jet Propulsion Laboratory in California, who is a member of the Planck team. "Planck is more sensitive, and has a greater frequency range, taking us into a realm that WMAP couldn't even see," he told physicsworld.com. One of the telescope's main objectives is to accurately map fluctuations in the CMB, so it is well suited to subtracting that radiation to reveal the haze.

With the presence of the haze independently verified, focus has returned to determining its origin. After its original discovery, some researchers, including Dan Hooper of Fermilab near Chicago, US, argued that annihilating dark matter could explain the galactic haze. Dark matter has long been thought to bind galaxies together, but detecting it directly has remained elusive. In Hooper's mechanism, dark-matter particles annihilate to produce conventional electrons and positrons. These particles then spiral around the Milky Way's magnetic field to produce the radiation we see as the microwave haze.

However, as well as confirming its existence, Planck was also able to reveal details of the shape of the haze. "The new results seem to suggest that the haze is elongated rather than spherical [as previously thought]," explains Hooper, who was not involved in the Planck research. "Simulations suggest that we would expect to find dark-matter halos that are roughly spherically symmetric," he adds. There might still be room for a partial dark-matter explanation, however. "Our opinion is that no single current model explains the haze's origin," admits Gorski. So Hooper is not giving up. "It still smells like dark matter to me," he says.

Related to Fermi bubbles?

The Planck observations also revealed a sharp southern edge to the haze. This implies that the formation mechanism is sporadic – if it were continuous, then the edges of the haze would appear diffuse. "The sharpness also implies that the haze might be related to the Fermi bubbles," says Hooper. The Fermi bubbles are two giant, gamma-ray-emitting structures extending 25,000 light-years above and below the centre of the galaxy. Spotted by the Fermi space telescope in November 2010, these bubbles also have sharp, defined edges pointing towards a rapid release of energy as their cause, rather than a continuous, steady process.

It is possible, then, that the two phenomena have a common origin. "There may be some mechanism crossover between the haze and the bubbles," says Andrew Pontzen, a theoretical cosmologist at the University of Oxford in the UK. "The next step would be to see exactly how much overlap there is in the data," he adds. Any area where the two phenomena do not overlap still leaves the door open for dark matter to play a part. "Maybe the cause [of the haze] is a mixture of dark-matter annihilation and other mechanisms," Hooper adds.

Whichever explanation turns out to be correct, the Planck results have focused the argument. "Observationally, this is a great step forward," Pontzen says. "However, the centre of the galaxy remains an intrinsically complicated place where a plethora of strange things are going on," he adds. In the end, it might take Planck's successors to settle the debate.

 

Link to the published article

One-day optics workshop

Pulsar timekeepers measure up to atomic clocks

An international team of astronomers has come up with a new way of keeping track of time by observing a collection of pulsars – rapidly rotating stars that emit radio pulses at very regular intervals. Although the ultimate goal of the research is to use pulsar timing to detect gravitational waves, the group has shown that the pulsar-based timescale can also be used to reveal inconsistencies in timescales based on atomic clocks.

Pulsars are neutron stars that rotate at very high speeds and appear to emit radio pulses at extremely regular intervals. The pulses are actually all we see of a radio beam that is focused by the star's magnetic field and swept around like a lighthouse beacon. Using a radio telescope, astronomers can measure the arrival times of successive pulses to a precision of 100 ns over a measurement time of about an hour. While this level of precision is significantly less than that offered by an atomic clock, pulsars could in principle be used to create timescales that are stable for decades, centuries or longer. This could be useful for identifying fluctuations in Earth-based timekeepers such as atomic or optical clocks, which normally do not operate over such long periods.

The team, which is led by George Hobbs at CSIRO Astronomy and Space Sciences in Australia, looked at data from the Parkes Pulsar Timing Array (PPTA) project. Using the Parkes radio telescope in Australia, the project aims to use a set of about 20 pulsars in different parts of the Milky Way to detect gravitational waves. The idea is that when a gravitational wave passes through our galaxy, its presence warps spacetime such that the millisecond gaps between the pulses arriving from various pulsars are affected in a very specific way.

Extremely precise timescale

In developing the PPTA, Hobbs and colleagues in Australia, Germany, the US and China realized that the timing data from a number of pulsars could be combined to create an extremely precise timescale stretching back to the mid-1990s. A timescale is a sequence of marks in time, each separated by a defined time interval. The most precise timescales available today are generated by atomic or optical clocks, which operate using the frequencies of certain atomic transitions.

The team made a timescale based on 19 pulsars by first correcting the data from each pulsar for a number of different things that can affect the measurement of the gap between pulses. These include instrumental effects, the motion of the Earth within the solar system and the effects of interstellar plasma. Also, the frequency of a pulsar drops slowly with time as rotational energy is radiated away, and this must be corrected for.

The team then combined the data from the 19 pulsars to create the Terrestrial Time PPTA11 or TT(PPTA11) timescale, where 11 signifies that the most recent data used are from 2011. To show how their new timescale could be used to evaluate timescales generated by atomic clocks, the researchers compared it with Terrestrial Time (International Atomic Time) – TT(TAI). This is a timescale that is created by combining the results of several hundred atomic clocks worldwide. TT(TAI) is never revised, and therefore provides a historical record of the performance of atomic clocks. Instead, the atomic-clock timescale is gently "steered" towards better timekeeping through revision and reanalysis of the time standard.
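As a rough illustration of the combination step rather than the team's actual analysis (which models the common signal together with its full noise properties), the sketch below assumes hypothetical arrays of per-pulsar timing residuals and their uncertainties, and estimates the clock term common to all pulsars at each epoch with an inverse-variance weighted mean; every name and number in it is illustrative.

import numpy as np

def ensemble_clock_offset(residuals, uncertainties):
    # residuals:     (n_pulsars, n_epochs) timing residuals in seconds, i.e. observed
    #                minus predicted pulse arrival times after the per-pulsar
    #                corrections described above
    # uncertainties: matching 1-sigma errors in seconds
    # A signal common to every pulsar at a given epoch is attributed to the
    # reference clock; here it is estimated by an inverse-variance weighted mean.
    w = 1.0 / uncertainties**2
    return np.sum(w * residuals, axis=0) / np.sum(w, axis=0)

# Toy data: 3 pulsars observed at 5 epochs, with a hypothetical clock error of up to
# 0.3 microseconds buried in 50 ns of per-pulsar noise.
rng = np.random.default_rng(0)
true_clock = np.array([0.0, 0.1, 0.3, 0.2, 0.0]) * 1e-6
residuals = true_clock + rng.normal(0.0, 0.05e-6, size=(3, 5))
errors = np.full((3, 5), 0.05e-6)
print(ensemble_clock_offset(residuals, errors))  # recovers true_clock to within ~30 ns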

Looking for deficiencies

If the new pulsar timescale is indeed precise, it should be able to reveal historical deficiencies in the atomic-clock timescale – and this is exactly what the team was able to do. The researchers compared the two timescales going back to about 1994 and found a distinct departure at around 1998. The team also did a similar comparison between the atomic-clock timescale and a corrected version of Terrestrial Time that is produced annually by the International Bureau of Weights and Measures – TT(BIPM11). The researchers saw the same distinct departure at around 1998, which suggested that, like TT(BIPM11), the pulsar-based timescale is capable of revealing inconsistencies in atomic-clock-based timescales.

The similarity between TT(PPTA11) and TT(BIPM11) also allowed the team to conclude that there are no large unexpected errors in TT(BIPM11). Furthermore, the results corroborate previous research, which concluded that the TT(TAI) timescale is not sufficiently precise to be used for pulsar-timing applications such as the detection of gravitational waves, and that TT(BIPM11) should always be used in such applications.

Team member David Champion at the Max Planck Institute for Radioastronomy in Bonn told physicsworld.com that the next step in developing the timescale is to incorporate pulsar data from other radio telescopes that were obtained over the same time period.

Proof of principle

Setnam Shemar of the Time and Frequency Group at the UK's National Physical Laboratory described the work as "proof of principle that PPTA data can be used to find anomalies in some present-day atomic timescales". While he thinks it is possible that a pulsar-based timescale could outperform the best present-day atomic timescale over long times, Shemar says that it is too early to tell. Indeed, he points out that if improvements in atomic and optical clock technologies outpace improvements in pulsar timing, as he expects to be the case, a pulsar-based timescale may in future be more useful in a search for gravitational waves than a means for checking atomic timescales.

 

Link to the published article

One-day introductory workshop on elementary particle physics

Call for collaboration

The blog's editors intend to prepare and present a paper at the annual physics conference, which will be held in Shahrivar (late summer) of next year. Anyone interested in collaborating on this project is invited to take part. Note that the names of all collaborators on the project will be listed on the paper in the order of the work and calculations they carry out.

Working title for the project: space-time structure in string theory

Space Time Structure in String Theory

 

The certainty of the uncertainty principle

When students first study quantum mechanics, they learn about the Heisenberg uncertainty principle, which is usually introduced as if it were about an intrinsic uncertainty that every quantum system must possess. Heisenberg, however, originally formulated the principle as an "observer effect": a relation between the precision of a measurement and the disturbance it produces, as when a photon is used to measure the position of an electron. Although the former form has been rigorously proven, the latter is less widely applicable and, as has recently been shown, mathematically incorrect. Writing in Physical Review Letters, Lee Rozema and colleagues at the University of Toronto, Canada, have demonstrated experimentally that a measurement can in fact violate Heisenberg's original precision-disturbance relation.

If an observer affects what is being observed, who can measure the disturbance caused by such a measurement? Rozema uses a procedure known as "weak" quantum measurement: if a quantum system can be probed through a sufficiently weak interaction, information about its initial state can be obtained with little or no disturbance. The authors applied this approach to characterize the precision and disturbance of a measurement of the polarization of entangled photons. By comparing the initial and final states, they found that the disturbance created by the measurement is smaller than Heisenberg's precision-disturbance relation predicts.

Although Rozema's measurements leave Heisenberg's principle intact as a statement of fundamental quantum uncertainty, they expose the problems of applying it to measurement precision. These observations not only demonstrate the degree of precision that can be reached with weak-measurement techniques, but also help to probe the foundations of quantum mechanics.
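For context, the two relations being distinguished here are usually written (standard notation, not equations taken from the article) as

\sigma(x)\,\sigma(p) \ge \frac{\hbar}{2} \qquad \text{and} \qquad \varepsilon(x)\,\eta(p) \ge \frac{\hbar}{2},

where the first involves the intrinsic spreads σ of a prepared state and remains untouched by this experiment, while the second relates the precision ε(x) of a position measurement to the disturbance η(p) it imparts to the momentum – the form that the Toronto experiment shows can be violated.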

 

Source link (via the American Physical Society)

Theory of Anything?

 String theory, which stretches back to the late 1960s, has become in the last 20 years the field of choice for up-and-coming physics researchers. Many of them hope it will deliver a "Theory of Everything"—the key to a few elegant equations that explain the workings of the entire universe, from quarks to galaxies.

Elegance is a term theorists apply to formulas, like E=mc², which are simple and symmetrical yet have great scope and power. The concept has become so associated with string theory that Nova's three-hour 2003 series on the topic was titled The Elegant Universe.

Yet a demonstration of string theory's mathematical elegance was conspicuously absent from Nova's special effects and on-location shoots. No one explained any of the math onscreen. That's because compared to E=mc², string theory equations look like spaghetti. And unfortunately for the aspirations of its proponents, the ideas are just as hard to explain in words. Let's give it a shot anyway, by retracing the 20th century's three big breakthroughs in understanding the universe.

Step 1: Relativity (1905-1915). Einstein's Special Theory of Relativity says matter and energy (E and m in the famous equation) are equivalent. His General Theory of Relativity says gravity is caused by the warping of space due to the presence of matter. In 1905, this seemed like opium-smoking nonsense. But Einstein's complex math (E=mc² is the easy part) accurately predicted oddball behaviors in stars and galaxies that were later observed and confirmed by astronomers.

Step 2: Quantum mechanics (1900-1927). Relativistic math works wonderfully for predicting events at the galactic scale, but physicists found that subatomic particles don't obey the rules. Their behavior follows complex probability formulas rather than graceful high-school geometry. The results of particle physics experiments can't be determined exactly—you can only calculate the likeliness of each possible outcome.

Quantum's elegant equation is the Heisenberg uncertainty principle. It says the position (x) and momentum (p) of any one particle are never completely knowable at the same time. The closest you can get is a function related to Planck's constant (h), the theoretical minimum unit to which the universe can be quantized.
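In symbols, the relation being paraphrased is usually written

\Delta x\,\Delta p \ge \frac{\hbar}{2}, \qquad \hbar = \frac{h}{2\pi},

with Δx and Δp the uncertainties in position and momentum.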

Einstein dismissed this probabilistic model of the universe with his famous quip, "God does not play dice." But just as Einstein's own theories were vindicated by real-world tests, he had to adjust his worldview when experimental results matched quantum's crazy predictions over and over again.

These two breakthroughs left scientists with one major problem. If relativity and quantum mechanics are both correct, they should work in agreement to model the Big Bang, the point 14 billion years ago at which the universe was at the same time super massive (where relativity works) and super small (where quantum math holds). Instead, the math breaks down. Einstein spent his last three decades unsuccessfully seeking a formula to reconcile it all—a Theory of Everything.

Step 3: String theory (1969-present). String theory proposes a solution that reconciles relativity and quantum mechanics. To get there, it requires two radical changes in our view of the universe. The first is easy: What we've presumed are subatomic particles are actually tiny vibrating strings of energy, each 100 billion billion times smaller than the protons at the nucleus of an atom.
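As a rough order-of-magnitude check (numbers not given in the article): 100 billion billion is 10²⁰, and a proton measures roughly 10⁻¹⁵ m across, so the strings in question would be of order 10⁻³⁵ m – about the Planck length.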

That's easy to accept. But for the math to work, there also must be more physical dimensions to reality than the three of space and one of time that we can perceive. The most popular string models require 10 or 11 dimensions. What we perceive as solid matter is mathematically explainable as the three-dimensional manifestation of "strings" of elementary particles vibrating and dancing through multiple dimensions of reality, like shadows on a wall. In theory, these extra dimensions surround us and contain myriad parallel universes. Nova's "The Elegant Universe" used Matrix-like computer animation to convincingly visualize these hidden dimensions.

Sounds neat, huh—almost too neat? Physicist Lawrence Krauss' book Hiding in the Mirror is subtitled The Mysterious Allure of Extra Dimensions as a polite way of saying String Theory Is for Suckers. String theory, he explains, has a catch: Unlike relativity and quantum mechanics, it can't be tested. That is, no one has been able to devise a feasible experiment for which string theory predicts measurable results any different from what the current wisdom already says would happen. Scientific Method 101 says that if you can't run a test that might disprove your theory, you can't claim it as fact. When I asked physicists like Nobel Prize-winner Frank Wilczek and string theory superstar Edward Witten for ideas about how to prove string theory, they typically began with scenarios like, "Let's say we had a particle accelerator the size of the Milky Way …" Wilczek said strings aren't a theory, but rather a search for a theory. Witten bluntly added, "We don't yet understand the core idea."

If stringers admit that they're only theorizing about a theory, why is Krauss going after them? He dances around the topic until the final page of his book, when he finally admits, "Perhaps I am oversensitive on this subject …" Then he slips into passive-voice scientist-speak. But here's what he's trying to say: No matter how elegant a theory is, it's a baloney sandwich until it survives real-world testing.

Krauss should know. He spent the 1980s proposing formulas that worked on a chalkboard but not in the lab. He finally made his name in the '90s when astronomers' observations confirmed his seemingly outlandish theory that most of the energy in the universe resides in empty space. Now Krauss' field of theoretical physics is overrun with theorists freed from the shackles of experimental proof. The string theorists blithely create mathematical models positing that the universe we observe is just one of an infinite number of possible universes that coexist in dimensions we can't perceive. And there's no way to prove them wrong in our lifetime. That's not a Theory of Everything, it's a Theory of Anything, sold with whizzy PBS special effects.

It's not just scientists like Krauss who stand to lose from this; it's all of us. Einstein's theories paved the way for nuclear power. Quantum mechanics spawned the transistor and the computer chip. What if 21st-century physicists refuse to deliver anything solid without a galaxy-sized accelerator? "String theory is textbook post-modernism fueled by irresponsible expenditures of money," Nobel Prize-winner Robert Laughlin griped to the San Francisco Chronicle earlier this year.

Krauss' book won't turn that tide. Hiding in the Mirror does a much better job of explaining string theory than discrediting it. Krauss knows he's right, but every time he comes close to the kill he stops to make nice with his colleagues. Last year, Krauss told a New York Times reporter that string theory was "a colossal failure." Now he writes that the Times quoted him "out of context." In spite of himself, he has internalized the postmodern jargon. Goodbye, Department of Physics. Hello, String Studies.