Albert Einstein's Autobiographical Notes; Part Four

In my opinion there is no doubt that our thinking goes on for the most part without the use of signs (words) and, beyond that, to a considerable degree unconsciously. For how else could it be that we sometimes "wonder" quite spontaneously about some experience? This "wondering" seems to occur when an experience comes into conflict with a world of concepts that is already sufficiently fixed within us. Whenever such a conflict is experienced sharply and intensively, it reacts back upon our world of thought in a decisive way. The development of this world of thought is, in a certain sense, a continuous flight from "wonder".

 

Click here to read the full text of these notes, and click here to see the previous instalments.

Reposted from the "Scientific and Research" (علمی و تحقیقی) weblog.

Laser heats up fusion quest

Physicists at the $3.5bn National Ignition Facility (NIF) say they have taken an important step in the bid to generate fusion energy using ultra-powerful lasers. By focusing NIF's 192 laser beams onto a tiny gold container, researchers have achieved the temperature and compression conditions that are needed for a self-sustaining fusion reaction – a milestone that they hope to pass next year.

Located at the Lawrence Livermore National Laboratory in California and officially opened last year, NIF will provide data for nuclear weapons testing as well as carry out fundamental research in astrophysics and plasma physics. The facility will also aim to fuse the hydrogen isotopes deuterium and tritium in order to demonstrate the feasibility of laser-based fusion for energy production.

These hydrogen isotopes will be contained within peppercorn-sized spheres of beryllium, which will be placed in the centre of an inch-long hollow gold cylinder – known as a hohlraum. By heating the inside of the hohlraum, NIF's laser beams will generate X-rays that cause the beryllium spheres to explode and, due to momentum conservation, the deuterium and tritium to rapidly compress. A shockwave from the explosion will then increase the temperature of the compressed matter to the point where the nuclei overcome their mutual repulsion and fuse.

One of the main aims of NIF is to achieve "ignition", which means that the fusion reactions generate enough heat to become self-sustaining. Researchers hope that by burning some 20–30% of the fuel inside each sphere the reactions will yield between 10 and 20 times as much energy as supplied by the lasers.
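The arithmetic behind those gain figures can be sketched with back-of-the-envelope numbers. Note the fuel mass and laser energy used below are illustrative assumptions, not figures from the article; only the burn fraction and the 17.6 MeV released per D–T fusion reaction are standard values.

```python
# Back-of-the-envelope D-T fusion gain estimate.
# Illustrative assumptions: ~0.2 mg of D-T fuel per capsule and
# ~1.8 MJ of delivered laser energy; neither figure is from the article.
E_PER_REACTION_J = 17.6e6 * 1.602e-19   # 17.6 MeV released per D+T fusion
PAIR_MASS_KG = 5.03 * 1.661e-27         # mass of one D+T pair (~5 amu)

fuel_mass_kg = 0.2e-6        # ~0.2 mg of D-T fuel (assumed)
laser_energy_j = 1.8e6       # ~1.8 MJ of laser energy (assumed)
burn_fraction = 0.25         # burning ~25% of the fuel, per the article

n_pairs = fuel_mass_kg / PAIR_MASS_KG
fusion_yield_j = burn_fraction * n_pairs * E_PER_REACTION_J
gain = fusion_yield_j / laser_energy_j
print(f"yield ~ {fusion_yield_j/1e6:.0f} MJ, gain ~ {gain:.0f}x")
```

With these assumed inputs the estimate lands at roughly 17 MJ of fusion yield and a gain near 10, consistent with the 10–20x range quoted above.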

Hotter than the Sun

NIF first began testing the laser beams last year, and now two groups at Lawrence Livermore have shown that they can obtain the desired conditions inside the hohlraum. They did this using plastic spheres filled with helium, rather than actual fuel pellets, since these were easier to analyse. By combining their experimental measurements with computer simulations, the researchers found that the hohlraum converted nearly 90% of the laser energy into X-rays and heated up to some 3.6 million degrees Celsius. They also found that the sphere was compressed very uniformly, its diameter shrinking from around two millimetres to about a tenth of a millimetre.

"These results are better than we were hoping," says NIF boss Edward Moses. "People were concerned that we wouldn't be able to achieve the desired temperature and implosion shape, but those fears have proved unfounded." Moses says that the next step will be to replace the plastic spheres with beryllium ones containing unequal quantities of deuterium and tritium, in order to study how hydrodynamic instabilities might lead to asymmetrical implosions. The final step will then be to switch over to actual fuel pellets, which will contain equal quantities of the two hydrogen isotopes, and which, it is hoped, will ignite.

Moses says he hopes that ignition will take place in 2012. But he is keen not to raise expectations, having had to deal with many technical problems since construction started on NIF back in 1997. Indeed, he and his colleagues had predicted last January that ignition would be achieved by the end of 2010. "We might be able to reach ignition around spring or summertime next year," he says. "But there's a lot of physics that can run us off course in the meantime."

David Hammer, a plasma physicist at Cornell University in New York, says that the latest results are encouraging. However, he warns that the study was done without fully understanding the interactions taking place between the laser beams and plasma inside the hohlraum and that such interactions could wreck the very precise symmetry of the implosion needed for ignition.

The battle to find Maxwell's perfect image

To make a perfect lens – one that produces images at unlimited resolution – you need a very special material that exhibits "negative refraction". Or so researchers had thought.

Now scientists in the UK and Singapore have published experimental evidence that shows perfect lenses don't need negative refraction at all – and that a simpler solution lies in a 150-year-old design pioneered by James Clerk Maxwell. If true, the discovery could be a goldmine for the computer-chip industry, allowing electronic circuits to be made far more complex than those of today. However, the work is proving so controversial that the lead scientist has become embroiled in a fiery debate with other experts in the field.

The route to perfection

Until the turn of the century, perfect imaging was thought impossible. Light diffracts around features the same size as its wavelength, which should make it impossible for a lens to resolve details that are any smaller.

But in 2000 John Pendry of Imperial College London found a way to beat this "diffraction limit". He understood that, in addition to the light captured by normal lenses, an object always emits "near field" light that decays rapidly with distance. Near-field light conveys all an object's details, even those smaller than a wavelength, but no-one knew how to capture it.
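The "rapid decay" of near-field light can be made concrete with a minimal numerical sketch (the wavelength and feature size below are arbitrary choices for illustration): a spatial-frequency component kx finer than the free-space wavenumber k0 is evanescent, its amplitude falling as exp(-κz) with κ = sqrt(kx² − k0²).

```python
import math

wavelength = 500e-9                  # green light, chosen for illustration
k0 = 2 * math.pi / wavelength        # free-space wavenumber

# A feature twice as fine as the diffraction limit has kx = 2*k0,
# so the component is evanescent rather than propagating.
kx = 2 * k0
kappa = math.sqrt(kx**2 - k0**2)     # evanescent decay constant

for z_nm in (50, 100, 200):          # distances from the object
    amp = math.exp(-kappa * z_nm * 1e-9)
    print(f"z = {z_nm:3d} nm: relative amplitude {amp:.3f}")
```

The amplitude drops by roughly an order of magnitude within about half a wavelength, which is why the near field must be captured very close to the object.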

Pendry's answer was negative refraction, a phenomenon that bends light in the opposite direction to a normal substance like glass. If someone could make such a negative-index material, he said, it would be able to rein in an object's near-field light, producing a perfect image.

It was a controversial prediction, but in 2004 researchers at the University of Toronto proved the sceptics wrong by creating a negative-index material and using it to focus radio waves beyond the diffraction limit. And that might have been where the story ended, except that negative-index lenses ultimately proved to be impractical for many applications. They absorb a lot of light, and only work within a wavelength's distance of the object.

Maxwell's fisheye

In 2009 Ulf Leonhardt of St Andrews University in the UK realized there may be another way forward. He had been examining a flat "fisheye" lens, first conceived by Maxwell in the mid 19th century, in which a unique refractive-index profile forces light rays to travel in circles, as though they were hugging the surface of an invisible sphere. Indeed, light rays emitted from an object anywhere on the flat surface would always meet at a point precisely opposite.

Leonhardt solved the standard equations of light propagation for the fisheye and came to a remarkable conclusion: all light, including the near field, is refocused at the image point as though it were travelling backwards in time to the source. In other words, said Leonhardt, the image would be the object's exact, perfect replica.

The fisheye did have a slight problem. For one half of the lens, the change in refractive index implied light would have to travel faster than it does in a vacuum – a known impossibility. Leonhardt's solution again played on the symmetry: replace that half with a mirror, he said, so the semicircles of light on that side are simply reflected from the other.

But like Pendry's nine years before, the prediction met fast resistance. Within two months, Richard Blaikie of the University of Canterbury in New Zealand published a response claiming that any enhanced focusing would not be intrinsic to the fisheye, but an artefact left by having a "drain" where the image is. "An everyday example I can think about is a lightning rod, which concentrates electric fields around its sharp tip," says Blaikie. "Leonhardt and others somehow confuse this natural (and very well understood) field concentration with imaging."

A perfect illusion?

The drain, which is essentially a detector, was mentioned as a requirement in Leonhardt's paper. Yet he admits that he wasn't initially aware of its role – that it captures the image before the light continues on its circular track back to the source. "It turns out that perfect imaging is only possible when the image is detected; the perfect image appears, but only if one looks... Purists may call it an artefact, but if the 'artefact' creates a perfect image, it's a useful feature."

Others, including Pendry, were not convinced: at least five other papers have been published arguing against Leonhardt's prediction. However, in a paper published today in the New Journal of Physics, Leonhardt and colleagues from St Andrews and the National University of Singapore claim "unambiguous" proof that they have beaten the diffraction limit with the fisheye for microwaves.

In their experiment, Leonhardt's group forms the fisheye's varying refractive-index profile with concentric rings of copper, surrounded by a mirror. Microwaves enter on one side from a pair of cables just one-fifth of a wavelength apart, and travel across the rings to a bank of 10 cables, functioning as drains (see "Leonhardt's fisheye").

Crucially, the researchers show that the signal arriving at the bank is not smoothed out over all the cables, as would be the case in a normal, diffraction-limited lens. Instead, only those two drains precisely opposite the two cables register strong signals (see "Perfect evidence?"). For Leonhardt, this is proof of imaging beyond the diffraction limit, and the basis of a perfect lens. "The behaviour cannot be explained as an artefact of the drains," he adds, "because otherwise all 10 drains would register intensity spikes."

Clone wars

Yet despite this demonstration, all authors of the original arguments against Leonhardt's prediction told physicsworld.com that they are not convinced. Pendry believes the lens works only when the drains are a "clone" of the source, so that the near-field light is tricked into reappearing. "If the clone is removed, resolution degrades and is limited by wavelength as in a normal lens," he says. The need for a clone would make the lens useless for imaging features that are too small to see.

Leonhardt disagrees. His drain cables were half the length of the source cables, so were not clones, he says. Indeed, he believes that it would be possible to repeat the experiment for visible light, with photographic film recording the perfect image. And he has supporters: Matti Lassas, a mathematician at the University of Helsinki, Finland, thinks Leonhardt has answered his critics' arguments convincingly. The ideas are "true breakthroughs in transformation optics", Lassas says.

"It will take time and more experiments," says Leonhardt, "but I'm sure in the end even the most hard-nosed critics will be convinced that it works. Maybe they need to see a perfect photograph of fine structures that are otherwise impossible to see. Seeing is believing, but then it will be too late for the sceptics to be ahead of the game."

Passing of the Esteemed Professor Dr. Alenush Terian

With the deepest sorrow and regret, we offer our condolences to the nation's scientific community on the passing of the esteemed professor Dr. Alenush Terian, distinguished faculty member of the University of Tehran and the first female professor of physics at an Iranian university.


She was born in Tehran on 25 Bahman 1299 (February 1921). In Khordad 1326 (June 1947) she graduated with a bachelor's degree in physics from the Faculty of Science of the University of Tehran, and in Mehr of that year (October 1947) she was hired as a staff member of the faculty's physics laboratory; about a year later she was put in charge of its laboratory operations. In 1328 (1949), encouraged by her father and at her own expense, she went to study atmospheric physics at the University of Paris in France, receiving her state doctorate from that university in 1956 (1335). Out of love for her homeland she returned to Iran that same year and took up a post as associate professor in the physics department of the University of Tehran. In 1338 (1959) the government of West Germany offered the University of Tehran a fellowship at a solar-physics observatory, and she was sent abroad on it. In Khordad 1343 (June 1964) she was promoted to full professor, becoming the first woman physicist in Iran to attain that rank. From 1345 (1966) onward she played a leading role in founding the country's solar observatory and its solar telescope, and she was the first person to teach stellar physics in Iranian universities. In 1358 (1979), after 32 years of devoted service, she retired at her own request. She passed away on 14 Esfand 1389 (March 2011). May she rest in peace.


Her funeral was held on Monday 16 Esfand 1389 (7 March 2011) at St. Mary's Church on Jomhouri Street in Tehran.

Physicists assemble a spin ensemble

Conventional computers store data as bits that take the value zero or one, whereas quantum computers store information in qubits, which can hold more than one value at a time. Qubits are quantum states; when they are entangled – in photons or other particles – their measurement outcomes remain correlated regardless of the distance separating them.
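These ideas can be illustrated with a generic state-vector sketch in NumPy (this is textbook formalism, not anything from the experiment described here): a qubit can sit in a superposition of |0⟩ and |1⟩, and two qubits can form an entangled Bell state whose measurement outcomes are perfectly correlated.

```python
import numpy as np

# Single qubit in an equal superposition (|0> + |1>)/sqrt(2)
ket0, ket1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
plus = (ket0 + ket1) / np.sqrt(2)
probs = np.abs(plus) ** 2
print(probs)          # both outcomes equally likely

# Two-qubit Bell state (|00> + |11>)/sqrt(2): it cannot be written as a
# product of two single-qubit states, which is what "entangled" means.
bell = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)
print(np.abs(bell) ** 2)   # only 00 and 11 occur: outcomes are correlated
```

Measuring one qubit of the Bell state immediately fixes the outcome of the other, but – crucially – this correlation cannot be used to transmit information.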

As a result, quantum computers could in principle process and store enormous amounts of information at unprecedented speed, making it possible to tackle important problems such as simulating complex biological processes or exotic quantum phenomena.

One approach to quantum computing is to dope silicon with impurity atoms that each donate a single electron to the crystal. In this scheme quantum information is stored both in the spin state of the donated electron and in the nucleus of the donor atom, and the two together can form a pair of qubits. A great advantage of the approach is that silicon is already used on an enormous scale in processor fabrication.

 

Exceptional fidelity

Now Stephanie Simmons of the University of Oxford and an international team have shown that such qubits can be produced by doping a silicon crystal with phosphorus atoms. By cooling the material to 3 K and subjecting it to microwave and radio-frequency pulses, Simmons and colleagues entangled some 10^10 electron spins with phosphorus nuclear spins in what they call a spin ensemble. The entanglement was confirmed, with a fidelity of 98%, via microwave emission from the silicon crystal.

"We have effectively created millions of copies of the same quantum information, in which all of the spins behave identically," Simmons told physicsworld.com. One benefit of having so many copies, she added, is that the quantum information is amplified, letting the researchers verify that the particles really are entangled.

Jeremy O'Brien, a quantum-information researcher at the University of Bristol, agrees that this is an important advance and stresses the significance of demonstrating what the phosphorus electron–nuclear spin system can do. Individual control and read-out are essential for quantum computing, he says, since one would ultimately need to entangle many such spin systems with one another: "You would need the state of one spin system to influence the state of another in order to really harness the power of quantum computers."

Simmons says her group is now investigating ways of transporting the information; one option is to send electrical pulses through the material to physically move the electron qubits. She says she is personally excited by the prospect of scalable quantum computation because it could be applied to scientific problems such as protein folding, a key process in many biological reactions.

Reposted from the weblog of the Physics Department of Isfahan University of Technology

Link to the original article

Link to the article on the weblog

'Jumping' artificial atom is tracked in real time

Researchers in the US say they are first to watch a macroscopic "artificial atom" jumping between energy levels in real time. The new capability to continuously monitor the energy states of a superconducting quantum bit, or qubit, could help to correct errors in quantum computations, tightening the race between these solid-state systems and quantum computers based on trapped atoms.

An optimal measurement system for quantum computations must meet three tough conditions. For one, it must rarely misidentify states. Second, the measurement can't scramble the qubit's state, which is tricky because quantum states are easy to destroy. And finally, it must be fast – on the timescale of nanoseconds. This is essential for seeing quantum jumps, since many measurements must be made before the qubit changes state.

While these conditions were met 25 years ago for trapped atoms, Rajamani Vijay, Daniel Slichter and Irfan Siddiqi at the University of California, Berkeley are the first to score the hat trick using superconducting qubits – sometimes referred to as artificial atoms because of their discrete energy states.

The team did the experiment inside a cryogenic helium refrigerator cooled to 30 mK. The superconducting qubit is an aluminium circuit, a few hundred microns across but still considered macroscopic, and the low temperature brings out its quantum properties. Because it is a nonlinear electrical oscillator, its energy levels are unevenly spaced. This allows the team to use microwaves at a frequency of 4.753 GHz to drive it only between its ground and first excited states – the qubit's 0 and 1 states.
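Why the uneven spacing matters can be sketched with a simple transmon-style level formula. The anharmonicity value below is an illustrative assumption; only the 0→1 transition frequency of 4.753 GHz comes from the article.

```python
# Energy levels of a weakly anharmonic oscillator (transmon-like), in GHz:
#   E_n/h ~ n*f01 - (alpha/2)*n*(n-1), where alpha > 0 shifts
#   higher transitions downward relative to the 0->1 transition.
f01 = 4.753      # 0->1 transition frequency in GHz (from the article)
alpha = 0.20     # anharmonicity in GHz (illustrative assumption)

def level(n):
    return n * f01 - 0.5 * alpha * n * (n - 1)

f_01 = level(1) - level(0)   # the driven transition
f_12 = level(2) - level(1)   # shifted away from the drive frequency
print(f_01, f_12)
```

A drive tuned exactly to f_01 is off-resonant with the 1→2 transition, so the circuit is confined to its 0 and 1 states; in a perfectly harmonic oscillator the two frequencies would coincide and the drive would climb the whole ladder.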

Revealing and protecting

The researchers connect the qubit to a superconducting microwave cavity, an ordinary harmonic oscillator, through small capacitors. Because of this link, the cavity's preferred photon frequency changes with the state of the qubit, so the cavity can reveal information about the qubit while at the same time protecting it from noise.

To measure the qubit's state, the team generates higher-frequency microwave photons and admits them, no more than about 30 at a time, into the superconducting cavity. There, the photons interact with the qubit and acquire a phase shift depending on the qubit's state – 180° if the qubit is in its excited state, or 0° if it is in the ground state.
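This phase-based readout can be caricatured in a few lines (a toy model, not the group's analysis code): represent the reflected probe as a complex amplitude whose phase encodes the qubit state, add Gaussian noise, and threshold on the in-phase quadrature.

```python
import cmath
import random

random.seed(1)

def probe_signal(qubit_state, noise=0.2):
    """Reflected probe amplitude: 0 deg phase for ground, 180 deg for excited."""
    phase = cmath.pi if qubit_state == 1 else 0.0
    signal = cmath.exp(1j * phase)          # unit amplitude, shifted phase
    return signal + complex(random.gauss(0, noise), random.gauss(0, noise))

def read_out(qubit_state):
    """Assign the state from the sign of the real (in-phase) quadrature."""
    return 0 if probe_signal(qubit_state).real > 0 else 1

trials = [(s, read_out(s)) for s in (0, 1) * 500]
accuracy = sum(s == r for s, r in trials) / len(trials)
print(f"readout accuracy ~ {accuracy:.2f}")   # near 1.0 at this noise level
```

The two phase values are diametrically opposite in the complex plane, which is what makes the 0 and 1 states easy to separate even with modest noise.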

Now bearing the qubit's mark, the photons reflect out of the cavity toward the amplifier. Like the qubit, the amplifier is a nonlinear oscillator, this time designed to behave classically. Its superconductivity means low noise, since almost no energy is lost as heat.

The amplifier is finely tuned to accept a particular power, or rate of incoming photons, without changing their phase as they reflect back out. This is precisely the power it receives from a microwave source, which generates frequencies matching that of the signal photons.

Minuscule but important

However, in joining this stream of photons on the way to the amplifier, the signal photons exert a minuscule but important influence – adding a tiny bit more power if their phases haven't been shifted, or interfering destructively and slightly reducing the power if they have. The amplifier is so carefully balanced that it senses even this small difference and reacts dramatically. If the power is not what the amplifier expects, it imposes a phase shift of up to 90° in either direction on the photons it reflects. This shift is one way if the qubit is excited and the other way if it is in its ground state.

The amplifier magnifies the original signal from a few photons to hundreds of photons, making it large enough to withstand the noise introduced by conventional amplification further down the line. It also rapidly changes the phase of the photons, speeding up detection. "The combination of low noise and speed was crucial in observing quantum jumps for the first time," says Vijay.

The team extracts the qubit's state every 10 nanoseconds – often enough to monitor the qubit's 320-nanosecond-long excited state and notice when it jumps to the ground state. Now that such close surveillance of a qubit has been achieved, the method can be set to work correcting errors in quantum computations.

Correcting errors

To do this, a piece of quantum information is stored across multiple qubits. If one of these qubits falls out of its state, the others can still maintain the shared quantum information, as long as the wayward qubit is brought back into line quickly. But up until now, there was no way to continuously monitor a superconducting qubit and catch it making the transition from one state to another.
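The idea can be illustrated with a classical caricature of the three-qubit repetition code (a generic error-correction sketch, not the Berkeley group's scheme): the logical bit is stored redundantly, and monitoring lets a single flipped qubit be caught and majority-voted back into line.

```python
from collections import Counter

def encode(logical_bit):
    """Store one logical bit redundantly across three physical qubits."""
    return [logical_bit] * 3

def apply_error(qubits, index):
    """A 'quantum jump' knocks one physical qubit out of its state."""
    qubits = qubits.copy()
    qubits[index] ^= 1
    return qubits

def correct(qubits):
    """Majority vote brings the wayward qubit back into line."""
    majority, _ = Counter(qubits).most_common(1)[0]
    return [majority] * 3

encoded = encode(1)                 # [1, 1, 1]
damaged = apply_error(encoded, 0)   # one qubit has jumped
repaired = correct(damaged)         # shared information recovered
print(damaged, repaired)
```

Real quantum error correction uses syndrome measurements rather than reading the qubits directly (which would destroy the superposition), but the redundancy-plus-monitoring logic is the same, and it only works if the jump is caught quickly, which is what continuous monitoring provides.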

"Superconducting quantum bits are without doubt one of the hot candidates in the ongoing race towards a full-scale quantum computer," says Jens Koch of Northwestern University in Evanston, Illinois. He calls the new monitoring system "a key step forward".

Article link

Albert Einstein's Autobiographical Notes

Here I sit, at the age of 67, to write something like my own obituary. I am doing this not only because Dr. Schilpp has encouraged me to write it, but because I truly believe it is worthwhile to show those who strive and search alongside us how one's own striving and searching looks in retrospect. After some reflection I realized how inadequate any such attempt is bound to be; for however short and limited a person's working life may be, and however much error may pervade it, the things worth sharing with others do not come easily to mind. The man of 67 is by no means the same person he was at 50, at 30 or at 20. Every memory is coloured by one's present state of mind, and is therefore seen from a deceptive point of view. This consideration alone could deter one from the attempt. Nevertheless, much can be drawn from one's own experience that is not accessible to other minds... .

P.S. 1: This fine piece is taken from the "Scientific and Research" weblog. Dear readers can find the full text of these notes at the links below.

Part One

Part Two

Part Three

P.S. 2: Subsequent parts of these notes will be added to this post.

Part Four (21 Esfand 1389)

Part Five (17 Farvardin 1390)

Part Six (18 Farvardin 1390)

Part Seven (27 Farvardin 1390)

Part Eight (24 Ordibehesht 1390)

Part Nine (25 Ordibehesht 1390)

Part Ten (2 Khordad 1390)

Part Eleven (24 Mordad 1390)

Neutron star has superfluid core

Neutron stars should exhibit both superfluidity and superconductivity, according to two independent groups of scientists. The researchers studied the neutron star in the supernova remnant known as Cassiopeia A, and found that its core should exist in a superfluid state at up to around a billion kelvin, in contrast to the near-absolute-zero temperatures required for superfluidity on Earth.

Neutron stars are extremely dense objects that form when massive stars run out of nuclear fuel and collapse in on themselves. The enormous pressure within the star forces almost all of the protons and electrons together to form neutrons. Astrophysicists would like to know more about the properties of this ultra-dense matter, and one way to do this is to study exactly how neutron stars cool. The object at the heart of Cassiopeia A, which is about 11,000 light-years away, is ideally suited to such an exercise because, unusually, it has both a well established age – about 330 years – and a well known surface temperature – around 2 million kelvin.

Last year, Craig Heinke of the University of Alberta in Canada and Wynn Ho of the University of Southampton in the UK analysed 10 years' worth of X-ray data from NASA's Chandra satellite and found that the Cassiopeia A neutron star's surface temperature has dropped more quickly than expected – by about 4% between 2000 and 2009.

Cooper pairs and neutrinos

In the current work, groups led by Dany Page of the National Autonomous University of Mexico and Peter Shternin of the Ioffe Institute in St Petersburg, Russia, say that this rapid cooling can be partly explained by invoking the zero-viscosity state of matter known as superfluidity. They argue that when the temperature of a neutron star falls below a certain critical value it becomes energetically favourable for neutrons inside the star to form Cooper pairs – the basic unit of the superfluid state – and that the energy released as a result could be easily removed from the star in the form of neutrinos.

But the two groups have found that this mechanism cannot account for all of the cooling, and they independently conclude that superconductivity must also play a role. They say that shortly after the creation of the neutron star, protons would combine to form Cooper pairs, so creating a superconducting state by virtue of their charge. Bound up in this way, the protons would not be able to take part in various neutrino-emitting reactions that occur in non-superfluid matter, reducing cooling early on in the life of the star and leading to a sharper drop in temperature later on.

However, there are alternatives to the superfluid/superconductor hypothesis, such as the idea that the rapid cooling was simply the natural consequence of the temporary heating created by an asteroid impact. Excluding this and other ideas will require further data from Chandra – with a subsequent rise in temperature suggesting that the star could in fact be experiencing asteroid impacts, while a continuation of the cooling would support the theory of the Page and Shternin groups.

Extremely high temperature superconductors

If this new model is correct then, as Wynn Ho, who is also a member of Shternin's group, points out, neutron stars would probably contain the hottest superfluids and superconductors in the universe. Indeed, Shternin's team has calculated that the Cassiopeia A neutron star should exhibit superfluidity when its temperature drops below about 800 million kelvin and that proton superconductivity could take place at up to 2–3 billion kelvin. Page and colleagues, meanwhile, calculate the superfluid transition temperature to be around 500 million kelvin.

These figures are in stark contrast to the 130 kelvin that is the highest temperature at which any material on Earth has been found to superconduct. But Ho cautions that we can't draw any practical tips from neutron stars, pointing out that the huge densities of these objects mean that particles are extremely closely packed and so act via the strong nuclear force, whereas on Earth superfluidity and superconductivity are mediated by the fundamentally different, and much weaker, electromagnetic force.

Ho does, however, believe that the latest work could lead to a better understanding of the strong force itself. This view is shared by Fridolin Weber of San Diego State University in the US, who was not a member of either group and who maintains that the research is "indispensable in improving our knowledge of the poorly known properties of ultra-dense matter". Marcello Baldo of the University of Catania in Italy, meanwhile, points to the novelty of being able to test the superfluid model against future observations. "This is a wonderful possibility," he says, "which has never happened before in this field."

Nature's building blocks brought to life

These colourful shapes are part of a project, launched last week, to create a periodic table of shapes – one that would do for geometry what Dmitri Mendeleev did for chemistry in the 19th century. The three-year project could produce a useful resource to aid calculations by mathematicians and theoretical physicists in fields ranging from number theory to atomic physics. But those hoping to buy the wall chart may need to invest in a bigger house, as there are likely to be thousands of these basic building blocks from which all other shapes can be formed.

"The periodic table is one of the most important tools in chemistry. It lists the atoms from which everything else is made, and explains their chemical properties," says project leader Alessio Corti, based at Imperial College in the UK. "Our work aims to do the same thing for three-, four- and five-dimensional shapes – to create a directory that lists all the geometric building blocks and breaks down each one's properties using relatively simple equations."

The scientists are looking for shapes, known as "Fano varieties", which are basic building blocks and cannot be broken down into simpler shapes. They find Fano varieties by looking for solutions to a variant of string theory, a theory that seeks to unify quantum mechanics with gravity. String theory assumes that in addition to space and time there are other hidden dimensions, and that particles can be represented by vibrations along tiny strings that fill the entire universe.

According to the researchers, physicists can study these shapes to visualize features such as Einstein's space–time or subatomic particles. For the shapes to actually represent practical solutions, however, researchers must look at slices of the Fano varieties known as Calabi–Yau 3-folds. "These Calabi–Yau 3-folds give possible shapes of the curled-up extra dimensions of our universe," explains Tom Coates, another member of the Imperial team.

Coates says that the periodic table could also help in the field of robotics. These machines are operating in increasingly higher dimensions as they develop more life-like movements. Robot engineers could use the new geometries discovered for the project to help them develop the increasingly complicated algorithms involved with robotic motion.

The periodic table project is an international collaboration between scientists based in London, Moscow, Tokyo and Sydney, led by Corti at Imperial College London and Vasily Golyshev in Moscow. Given the large time differences involved, the team communicates using social media including a project blog, instant messaging and a Twitter feed. Team member Al Kasprzyk, based at the University of Sydney, says, "These tools are essential. With some of us working in Sydney while others are asleep in London, blogging is an easy way to exchange ideas and keep up to speed."

Will the LHC find supersymmetry?

The first results on supersymmetry from the Large Hadron Collider (LHC) have been analysed by physicists and some are suggesting that the theory may be in trouble. Data from proton collisions in both the Compact Muon Solenoid (CMS) and ATLAS experiments have shown no evidence for supersymmetric particles – or sparticles – that are predicted by this extension to the Standard Model of particle physics.

Supersymmetry (or SUSY) is an attractive concept because it offers a solution to the "hierarchy problem" of particle physics, provides a way of unifying the strong and electroweak forces, and even contains a dark-matter particle. An important result of the theory is that every known particle has at least one superpartner particle – or "sparticle". The familiar neutrino, for example, is partnered with the yet-to-be discovered sneutrino. These sparticles are expected to have masses of about one teraelectronvolt (TeV), which means that they should be created in the LHC.

In January the CMS collaboration reported its search for the superpartners of quarks and gluons, called squarks and gluinos, in the detector. If these heavy sparticles are produced in the proton–proton collisions, they are expected to decay to quarks and gluons as well as a relatively light, stable neutralino.

SUSY's answer to dark matter

The quarks and gluons spend the energy that was bound up in the sparticle's mass by creating a cascade of other particles, forming jets in the detector. But neutralinos are supersymmetry's answer to the universe's invisible mass, called dark matter. They escape the detector unseen, their presence deduced only through "missing energy" in the detector.
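The "missing energy" signature follows from momentum conservation in the plane transverse to the beam: the visible jet momenta should sum to zero, so any imbalance is attributed to unseen particles. A schematic NumPy sketch (the jet momenta below are invented purely for illustration):

```python
import numpy as np

# Transverse momenta (px, py) in GeV of the jets seen in one event.
# These numbers are made up for illustration only.
jets = np.array([
    [120.0,  35.0],
    [-60.0,  20.0],
])

# Momentum conservation: the visible sum should be ~zero. Whatever is
# missing is ascribed to particles that escaped unseen (e.g. neutralinos).
missing = -jets.sum(axis=0)
met = np.hypot(*missing)     # magnitude of the missing transverse energy
print(f"missing ET ~ {met:.1f} GeV")
```

In a real analysis all reconstructed objects, not just jets, enter the sum, and detector resolution produces some missing energy even in Standard Model events, which is why the searches count events rather than flagging single ones.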

CMS physicists went hunting for SUSY in their collision data by looking for two or more of these jets coinciding with missing energy. Unfortunately, the number of collisions that met these conditions was no greater than expected from Standard Model physics alone. As a result, the collaboration could only report new limits on a variation of SUSY called the constrained minimal supersymmetric standard model (CMSSM) with minimal supergravity (mSUGRA).

ATLAS collaborators chose a different possible decay for the hypothetical sparticle; they searched for an electron or its heavier cousin, the muon, appearing at the same time as a jet and missing energy. ATLAS researchers saw fewer events that matched their search and so could set higher limits, ruling out gluino masses below 700 GeV, assuming a CMSSM and mSUGRA model in which the squark and gluino masses are equal.

Good or bad omens?

Many believe that these limits are not bad omens for SUSY. The most general versions of the theory have more than a hundred variables, so these subtheories simplify the idea to a point where it can make predictions about particle interactions. "It's just a way to compare with the previous experiments," says CMS physicist Roberto Rossin of the University of California, Santa Barbara. "No-one really believes that this is the model that nature chose."

ATLAS collaborator Amir Farbin, of the University of Texas, Arlington, calls these first results an "appetiser" for the SUSY searches to be discussed at the March Moriond conferences in La Thuile, Italy. "At this point, we're not really ruling out any theories," he says.

Still, CMS scientists Tommaso Dorigo of the National Institute of Nuclear Physics in Padova, Italy, and Alessandro Strumia of the National Institute of Chemical Physics and Biophysics in Tallinn, Estonia, say that there is some cause for concern. Supersymmetry must "break", making the sparticles much heavier than their partners. It stands to reason that this should happen at the same energy as electroweak symmetry breaking – the point where the weak force carriers become massive while the photon stays massless.

This is thought to occur in the vicinity of 250 GeV. "But the LHC results now tell us that supersymmetric particles must be somehow above the weak scale," says Strumia.

Dorigo notes that although SUSY can allow for high sparticle masses, its main benefit of solving the hierarchy problem is more "natural" for masses near the electroweak scale. The hierarchy problem involves virtual particles driving up the mass of the Higgs boson. While supersymmetric particles can cancel this effect, the models become very complex if the sparticles are too massive.

John Ellis of CERN and King's College London disagrees that the LHC results cause any new problems for supersymmetry. Because the LHC collides strongly interacting quarks and gluons inside the protons, it can most easily produce their strongly interacting counterparts, the squarks and gluinos. However, in many models the supersymmetric partners of the electrons, muons and photons are lighter, and their masses could still be near the electroweak scale, he says.

Benchmark searches

CMS collaborator Konstantin Matchev of the University of Florida, Gainesville, explains that new physics was expected between 1 and 3 TeV – a range that the LHC experiments have hardly begun to explore. In particular, he notes that of the 14 "benchmark" searches for supersymmetry laid out by CMS collaborators, these early data have only tested the first two.

"In three years, if we have covered all these benchmark points, then we can say the prospect doesn't look good anymore. For now it's just the beginning," says Matchev.

But not everyone is optimistic about discovering SUSY. "We will get in a crisis, I think, in a few years," Dorigo predicts, sceptical of the theory because it introduces so many new particles of which data presently show "no hints". However, even though he would lose a $1000 bet, he says that he would still be among the first celebrating if the LHC does turn up sparticles.

Future of cosmology looks bright in a dark universe

Cosmologists can rejoice: they'll still be able to do their jobs a trillion (10^12) years from now – even after the universe's expansion has pushed nearly all galaxies out of sight. That's the conclusion of an astronomer in the US, who argues that the giant black hole at the centre of our galaxy will eject stars that future cosmologists can use to trace the universe's expansion.
Ever since the late 1990s, when astronomers used supernova explosions in distant galaxies to discover that the universe's expansion is accelerating, the far future of cosmology has seemed bleak. Within roughly 100 billion years, nearly all other galaxies will be so distant that their light won't reach us. As a result, future observers won't know that the universe is expanding. Furthermore, the cosmic microwave background – the Big Bang's afterglow and a key clue to the universe's origin – will be attenuated below the threshold at which it can be detected.


Standard scenario is 'wrong'
In October 2010 Abraham Loeb gave a public talk at the Harvard-Smithsonian Center for Astrophysics in Cambridge, Massachusetts, in which he recounted these difficulties. "People were very intrigued and came to me afterwards," says Loeb, who is professor of astronomy at Harvard University. "Someone said, 'Why don't you write an article about that?' And I said, 'Well, I'll think about it,' and when I thought about it I realized that it's wrong: there will be a way in the distant future to verify the standard cosmological scenario that we now have."
The key to Loeb's proposal is a class of objects known as hypervelocity stars. In 1988 Jack Hills, then at Los Alamos National Laboratory in New Mexico, pointed out that if a double star skirted close to the giant black hole at the Milky Way's centre, one star of the pair could fall toward the black hole. That star would lose an enormous amount of energy, and by conservation of energy its mate would gain an enormous amount and fly away at very high speed.

'Hypervelocity stars will save the day'
Then, in 2005, Warren Brown of the Harvard-Smithsonian Center for Astrophysics and his colleagues announced the discovery of the first hypervelocity star, and astronomers have since spotted more than a dozen others. "These hypervelocity stars will save the day," says Loeb, because even a trillion years from now the black hole at the galaxy's centre will still be ejecting stars. These stars will probably be red dwarfs, dim suns that can live for trillions of years.
Well before then, however, Loeb expects the Milky Way and the Andromeda galaxy, which lies 2.5 million light-years away, to merge into a larger galaxy that he calls "Milkomeda". Once a hypervelocity star leaves Milkomeda, the galaxy's gravity will at first slow it down, but eventually the accelerating expansion of the universe will speed it up. "By tracking the motions of these stars, a future cosmologist could deduce the existence of the cosmological constant," says Loeb. The cosmological constant represents the repulsive force of empty space and is what makes the universe's expansion accelerate.
The more massive Milkomeda is, the farther out the universe's expansion will make itself felt. Loeb calculates that if Milkomeda has a mass of 2 trillion Suns, this transition will occur at a distance of about 4.4 million light-years from us, while if Milkomeda weighs 10 trillion solar masses, the distance will be 7.5 million light-years.
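Loeb's transition distance can be roughly checked with a back-of-envelope balance. Under the simplifying assumption (ours, not necessarily the paper's exact treatment) that the transition sits where the galaxy's inward pull GM/r^2 equals the outward cosmological acceleration Ω_Λ H0^2 r, the distance is r = (GM / (Ω_Λ H0^2))^(1/3):

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # solar mass, kg
H0 = 2.27e-18        # Hubble constant (~70 km/s/Mpc) in s^-1
OMEGA_L = 0.7        # dark-energy density fraction
LY = 9.461e15        # metres per light-year

def transition_distance_mly(mass_solar):
    """Distance (millions of light-years) where cosmic acceleration overcomes the galaxy's pull."""
    r = (G * mass_solar * M_SUN / (OMEGA_L * H0 ** 2)) ** (1 / 3)
    return r / LY / 1e6

print(f"{transition_distance_mly(2e12):.1f}")   # 4.4 million light-years
print(f"{transition_distance_mly(10e12):.1f}")  # ~7.6, close to the quoted 7.5
```

That this crude estimate lands so close to both quoted figures suggests the scaling r ∝ M^(1/3) captures the essential physics.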

Not bright enough?
Lawrence Krauss of Arizona State University in Tempe, who has previously examined the potential of hypervelocity stars, is somewhat sceptical. "Right now, with supernovae – the brightest objects in the universe – we can barely detect the existence of a cosmological constant," he says. "Using a single star to measure the universe's expansion may just about be physically possible, but how plausible is it?" Krauss doubts that a civilization a trillion years from now – one that sees Milkomeda and its neighbouring galaxies as the entire universe, surrounded by empty, static space – would have any motivation to spend vast sums tracking tiny changes in the speeds of a few distant, faint stars.
Loeb replies that these stars will be thousands of times closer than today's most distant supernovae, and that such a civilization would have a trillion years in which to build a large telescope to study the escaping stars. Loeb wrote his paper this January, after a huge snowstorm in New England. "It kept anyone from disturbing me and breaking my concentration," he says.

Translated by: Sepehr

Link to the original article

Link to the article on the blog

Gas-rich galaxies confirm the prediction of a modified theory of gravity

According to the latest analysis by Professor Stacy McGaugh of the University of Maryland, new data from gas-rich galaxies match, to high accuracy, the predictions of one of the modified theories of gravity, known as MOND. "These results, the latest in a series of successful MOND predictions, raise new questions about the validity of the prevailing cosmological models," McGaugh notes.
According to modern cosmology, for the universe to behave the way we observe it to, its mass and energy must consist mostly of dark matter and dark energy. Yet there is no direct evidence that these invisible ingredients exist. A less popular alternative explanation holds that the current theory of gravity is simply inadequate to describe the dynamics of cosmic systems.
A handful of theories that modify our understanding of gravity have been proposed. One of them is Modified Newtonian Dynamics (MOND), put forward by Moti Milgrom in 1983. Among MOND's predictions is a definite relation between a galaxy's mass and its flat rotation speed. Until now, however, the uncertainty in estimating the stellar mass of spiral galaxies (such as the Milky Way) had prevented a decisive test.
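The MOND relation in question is the baryonic Tully-Fisher relation: the baryonic mass M and the flat rotation speed v obey M = v^4 / (G a0), where a0 ≈ 1.2×10^-10 m/s^2 is Milgrom's acceleration constant. A minimal sketch, using an illustrative rotation speed rather than a value from McGaugh's data:

```python
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
A0 = 1.2e-10       # Milgrom's acceleration constant, m/s^2
M_SUN = 1.989e30   # solar mass, kg

def mond_baryonic_mass(v_flat):
    """Baryonic mass (kg) that MOND predicts for a flat rotation speed v_flat (m/s)."""
    return v_flat ** 4 / (G * A0)

v = 100e3  # an illustrative flat rotation speed of 100 km/s
print(f"{mond_baryonic_mass(v) / M_SUN:.2e}")  # ~6.3e+09 solar masses
```

The steep v^4 dependence is what makes the relation such a sharp test: an accurate mass (here from counting gas atoms) pins down the rotation speed to within a few per cent.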
To sidestep this problem, McGaugh examined gas-rich galaxies, which have relatively few stars and most of whose mass sits in interstellar gas. "We understand the physics of the absorption and emission of energy by atoms in interstellar gas, such that counting photons in these structures is like counting atoms," he says. "That lets us make an accurate estimate of a galaxy's mass."
McGaugh compiled 47 of the latest measurements, made by himself and other researchers, of the masses and flat rotation speeds of gas-rich galaxies, and compared them with the relation predicted by MOND. All 47 data points fell very close to the MOND prediction; moreover, no dark-matter model was invoked anywhere in the calculation.
"In my opinion, it is remarkable that, a quarter of a century on, Milgrom's prediction agrees so well with the results from gas-rich galaxies," McGaugh adds.
Nearly everyone agrees that on the scale of large galaxy clusters and above, the universe is well described by the dark matter–dark energy picture. But according to McGaugh, such a cosmology says nothing about what happens on the scale of galaxies and below. "With MOND it is the other way around," he says. "MOND explains the small scales well but says little about the large scales. One can, of course, start from dark matter and tune the parameters to fit the current findings, but that is not as impressive as predicting the results before they are obtained, especially since we cannot see dark matter directly. We can make whatever adjustments to the parameters are needed, but that is like fitting planetary motions with Ptolemaic epicycles." Epicycles were used by the ancient Greek scientist Ptolemy to accommodate new observational discoveries within the geocentric model of the cosmos.
"If our ideas about dark matter are correct, why does MOND work at all?" McGaugh asks. "Ultimately, whichever theory is right – dark matter or a modification of gravity – it will have to explain this."

To see the original article from which this translation was made, see the rest of this post.

Link to the preprint of the original article

Translated by: Sepehr


Telescope team plans to track the whole sky

A European project that will allow astrophysical events to be tracked across the whole sky for the first time has begun and is already recruiting its personnel. Funded with €3m from the European Research Council over the next five years, the 4 Pi Sky project will use a combination of ground- and space-based telescopes to study rare events such as colliding neutron stars and exploding supernovae.

4 Pi Sky combines three separate terrestrial telescope systems. One is the Low Frequency Array (LOFAR), consisting of some 10,000 dipole antennas across Europe, which will be used to track objects at frequencies of about 30–240 MHz. The others are the MeerKAT array in South Africa and the Australian Square Kilometre Array Pathfinder (ASKAP) in Western Australia, which will be used to track phenomena at higher frequencies of about 1 GHz.

Linking telescopes

When combined, the telescopes will be able to monitor the whole sky, as scientists will be able to link from telescope to telescope to follow transient phenomena as the Earth rotates. Using this technique, researchers are expected to find and track thousands of new events that would have previously been missed.

"This is really a project that will try and co-ordinate what we can do with these telescopes in terms of optimizing their performance and getting the software up and running," says Rob Fender, an astronomer at the University of Southampton in the UK, who leads the 4 Pi Sky project. He adds that it will take at least another three years until all three telescopes come online and are fully networked, owing to the construction timelines for MeerKAT and ASKAP. Part of the network, however, will be working on LOFAR this year.

In addition to the ground-based telescopes, 4 Pi Sky researchers will also be able to use the Japanese Space Agency's Monitor of All-sky X-ray Image (MAXI) telescope aboard the International Space Station, which studies events at energies between 0.5 and 30 keV. "A lot of the transient phenomena that we study with the radio telescopes have an X-ray component, and we will be able to study that with MAXI," says Fender.

As well as studying the birth of black holes, the project might even help researchers to identify the sources of gravitational waves. Although 4 Pi Sky will not be able to detect such waves directly, if these ripples in space–time are spotted by another facility, then astronomers will be able to use 4 Pi Sky to see if the production of the waves was accompanied by a burst of radio signals.

SKA on the horizon

Meanwhile, two rival teams of astronomers in South Africa and from Australia and New Zealand are continuing to compete for the right to host the Square Kilometre Array (SKA), which will combine the signals from thousands of small antennas spread over a total collecting area of approximately one square kilometre. Both MeerKAT and ASKAP are being built as technology demonstrators for the SKA, which will be a radio telescope capable of extremely high sensitivity and angular resolution.

In the past week the South African campaign has claimed two major breakthroughs in its bid to host the telescope array. First, researchers combined data from two radio telescopes at separate locations using a technique known as "very long baseline interferometry", something that had previously required assistance from other countries. Second, South African computer engineers have finished building ROACH 2, the computer hardware that they believe will provide the data-processing capabilities for the SKA.

"In 2011 South Africa in conjunction with its eight African-partner countries bidding communally for the SKA will pull out all the stops to show the world that Africa is the future as far as science and technology are concerned," says Bernie Fanaroff, director of South Africa's SKA Project. A final decision on who is to host SKA will be made in 2012 based on a number of criteria, including operating and infrastructure costs and the levels of interference from sources such as mobile phones and televisions.

Physicists create 'anti-laser'

In a fascinating case of physics being turned on its head, a group of researchers at Yale University in the US has created an "anti-laser" that almost perfectly absorbs incoming beams of coherent light. The invention is based on a theoretical study reported last summer, in which Douglas Stone and his Yale colleagues argued that such a system should be possible in a device that they call a coherent perfect absorber (CPA). Instead of generating a coherent beam of light as a laser does, the device absorbs incoming coherent light and converts it into either heat or electricity.

Now, having teamed up with experimental physicists at Yale, Stone has built a version of the device by creating an "interference trap" inside a silicon wafer. Two laser beams – originally split from a single beam – are directed onto opposite sides of the wafer and their wavelengths are fixed so that an interference pattern is established. In this way, the light waves get stalled indefinitely, bouncing back and forth within the wafer, with 99.4% of both beams being transformed into heat.

The group argues that there is no theoretical reason why 100% of the light could not be absorbed using the technique. The researchers are also confident that the current size of the device, 1 cm in diameter, can be reduced to just 6 µm. "It is surprising that the possibility of the 'time-reversed' process of laser emission has not been seriously discussed or studied previously," says Stone.

Focus on applications

Stone's group believes that its "anti-laser" could prove to have many exciting applications. These might include filters for laser-based sensors at terahertz frequencies for sniffing out biological agents or pollutants, which requires detecting a small backscattered laser signal against a large background of thermal noise.

Another idea is to use the device as a type of shield in medical applications to enable surgeons to fire laser beams at unwanted biological tissue, such as tumours, with greater accuracy. "With our technique an appropriately engineered incident set of light waves could penetrate deeply into such a medium and be absorbed only at the centre, enabling delivery of energy to a specified region," explains Stone.

The group also speculates that adding a "control" beam would let the device be toggled between near-complete absorption and just 1% absorption. This property could enable such devices to function as optical switches, modulators and detectors in semiconductor integrated optical circuits.

One limitation of all such devices, however, is that they will only work at specific wavelengths, meaning that the technology will not be particularly useful in photovoltaic cells or cloaking devices.

The Second Physics Photography Competition

Introduction:
In today's world an exciting game is under way, and every human being is invited to play: the discovery of the universe.
The point is for each person to cultivate an inquisitive, creative eye, to recognize what is special in every phenomenon they observe, and to seek its scientific explanation.
With this in mind, and to join science and art – emphasizing their complementary roles in shaping our view of the world around us – we decided to hold a competition called the "Physics Photography Competition". Photographs entered in the competition should illustrate the physical laws governing the world around us. There is no restriction on subject matter: photographs may capture entirely natural phenomena, phenomena in which people and their artefacts play a part, or phenomena staged by the photographer.

Call for entries for the second National Physics Photography Competition:
The competitions committee of the scientific society of the Department of Nuclear Engineering and Physics at Amirkabir University of Technology (Tehran Polytechnic), in collaboration with the student branch of the Physics Society of Iran, is holding the second Physics Photography Competition.
Submission deadline: the end of Esfand 1389 (March 2011).
An exhibition of the selected entries will be held in the second half of Farvardin 1390 (April 2011). After the exhibition closes, a ceremony will be held at which valuable prizes will be awarded to the three works selected by the judges.
All selected works admitted to the exhibition will also receive a certificate of participation.
Entries will be judged on both their physics and their artistic merit by the following judges:
• Dr Hossein Abbasi – faculty member of the Department of Nuclear Engineering and Physics, Amirkabir University of Technology, and IPM
• Dr Hamed Seyed-Allaei – physics faculty member at IPM
• Mr Kaveh Farzaneh – member of the technical committee of the Negah Photographers' Society, Iranian Photographers' House


Rules and conditions of the second National Physics Photography Competition:
1. Each photograph must belong to the natural person whose name is written on the submitted package. If any violation or misuse of other people's photographs is established, the entry will be removed from the competition and legal proceedings will follow.
2. Photographs must not be manipulated or composited; adjustments to colour, light, contrast and the like are permitted.
3. For each photograph, a Word file – named with the number of the corresponding photograph and burned to the CD – must be supplied containing the photograph's title, a scientific explanation of the phenomenon depicted and/or any special shooting conditions, in at most 200 words. An entry that lacks a physical explanation of the observed phenomenon forfeits more than half of the judging score and effectively cannot place among the winners.
4. Beyond showing a physical phenomenon, a photograph should present it beautifully and artistically, since the aesthetic quality of the work also counts in the judging.
5. A physics photographer should aim for simplicity, beauty, creativity and physical insight: the fewer formulas and mathematical relations used in explaining and interpreting the phenomena, the better, since the audience for these photographs may include lay viewers.
6. The organizer reserves the right to use accepted photographs, credited to the photographer, in the competition's publications and websites.

How to submit:
1. Photographs must have a resolution of at least 300 dpi. If the quality is too low, the entry will be eliminated at the artistic judging stage and will not reach the scientific judging stage.
2. Image files must be in jpg or tiff format.
3. Each photographer may submit at most 10 photographs.
4. Each photograph must be accompanied by a Word file containing its title and scientific explanation.
5. The image files must be numbered, and the Word file for each photograph must carry the same number.
6. In addition to the explanations, a separate Word file containing the entrant's personal details must be included.
7. The personal details are: 1) full name; 2) mobile phone number and postal address; 3) degree and field of study; 4) e-mail address.
8. All of the files described above (images, explanations and personal details) must be burned onto a single CD and posted to the secretariat's address.

This round of the competition is sponsored by the company "Tak Derakht-e Sabz-e San'at".
Secretariat address: Scientific Society of the Department of Nuclear Engineering and Physics, Amirkabir University of Technology, Hafez Avenue, opposite Somayeh Street, Tehran
PO Box 15875-4413

Official competition website

Spinning black holes twist light

Light passing near to the spinning black holes thought to reside at the centre of many galaxies becomes twisted, possibly offering a new way to test Einstein's general theory of relativity. That is the conclusion of an international team of physicists, who say the phenomenon could be seen with existing telescopes.

The general theory of relativity (GR), put forward by Einstein more than 90 years ago, predicts a few phenomena that can be easily tested. One example is gravitational lensing – the fact that the gravity of stars and black holes can warp space–time enough to bend the passage of light. Another is time dilation, which makes clocks sitting in regions of lower gravity – say, at high altitudes – tick faster. Scientists are still trying to directly detect yet another general-relativity phenomenon called gravitational waves: ripples in space–time thought to be generated when large masses accelerate.
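The time-dilation example can be made concrete with a standard weak-field estimate: a clock raised by height h runs fast by a fraction of roughly g·h/c². The altitude below is an illustrative choice, not a figure from the article.

```python
g = 9.81       # surface gravity, m/s^2
c = 2.998e8    # speed of light, m/s

def fractional_rate_gain(height_m):
    """Approximate fraction by which a clock at altitude height_m runs fast."""
    return g * height_m / c ** 2

# For a clock flown at 10 km altitude, the gain accumulated over one day:
ns_per_day = fractional_rate_gain(10e3) * 86400 * 1e9
print(f"{ns_per_day:.0f}")  # ~94 nanoseconds gained per day
```

Effects of this size are routinely resolved by atomic clocks, which is why time dilation counts among GR's easily testable predictions.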

In 2003 Martin Harwit of Cornell University hinted that there might be one more testable effect to add to this GR toolbox. He was discussing a property of photons called orbital angular momentum (OAM). This is distinct from the more familiar intrinsic spin angular momentum of the photon, which is related to the circular polarization of light.

Interesting effects

A straightforward way to detect the OAM of photons in the lab had just been discovered in 2002 and, according to Harwit, there could be several astrophysical applications of OAM, including the investigation of spinning black holes. "A full theoretical investigation of such effects would be of interest," he concluded. Now, in a paper published today in Nature Physics, a group of physicists led by Bo Thidé of the Swedish Institute of Space Physics in Uppsala have done just that.

The team performed numerical calculations of light passing spinning black holes, which are thought to account for most of the black holes in the cosmos. Around these ultra-dense objects, space–time becomes twisted in an effect called frame dragging. When light enters this region, say Thidé and colleagues, its normally flat wavefronts become twisted too, taking on a corkscrew shape and a change in OAM. The faster the black hole spins, the greater the change in OAM, say the researchers.

"This is a nice, and seemingly sound, piece of theoretical analysis couched in the framework of modern optical theory," said Gary Gibbons, a theorist specializing in general relativity at Cambridge University. Marcus Werner at Duke University commented: "This could be rather significant, since it would open up an entirely new observational method."

Tantalizing possibility

To test the researchers' prediction, astrophysicists would need to examine the phase of photons using radio telescopes such as the Very Long Baseline Array at Socorro in New Mexico, US. If the prediction is borne out in measurements, general relativity would be further reinforced as a theory. If it isn't – a remote yet tantalizing possibility – there is the chance general relativity is not telling the whole story about space–time.

Martin Bojowald, at Pennsylvania State University, suggests that the OAM prediction could allow the direct detection of spinning black holes – a feat never accomplished despite widespread acceptance of their existence. Although nominally "black", black holes are thought to emit a faint haze of photons called Hawking radiation, which is currently impossible to see over the universe's background radiation. However, Bojowald believes the change in OAM might one day provide just enough of a signature to filter out Hawking radiation for observation.

"New calculations of the quantum processes that generate Hawking radiation are required, but before one can address that, the twisting of light already opens the way to exciting new possibilities in black-hole physics," he says.

DNA puts a new spin on electrons

A new and highly efficient way of filtering electrons according to their spin has been built using double strands of DNA. The technique, which has been developed by physicists in Israel and Germany, is about three times more efficient than using magnet-based spin filters. The method could be used in spintronic circuits, which exploit both the spin and charge of electrons, and could even lead to a better understanding of the possible role that spin plays in biological processes.

Spintronics holds great promise for creating circuits that are faster and more energy efficient than standard semiconductor devices. This is because the energy required to transport and process spins is much less than that needed to create electron currents. Creating spins is not a problem as magnetic metals such as iron are full of them. The challenge, however, is extracting the spins to form a spin-polarized current and injecting them into a circuit without the polarization degrading along the way.

Today, spin-polarized currents are often produced using a filter that exploits the phenomenon of giant magnetoresistance (GMR). This involves passing a current of unpolarized electrons through a material containing alternating layers of magnetic and non-magnetic material in the presence of a magnetic field. In principle, only electrons with their spin pointing in the "up" direction can pass through the filter, but the currents obtained from such a device are never entirely pure: a significant fraction of the electrons emerge spin "down".

Dense forest of DNA

Now, however, Ron Naaman and colleagues at the Weizmann Institute in Israel and the University of Münster in Germany have found that a 60% spin polarization at room temperature can be achieved by passing free electrons through a gold surface covered with a densely packed layer of DNA strands. Although DNA does not normally adhere to gold, the researchers treated one end of each strand with a sulphur compound to make it stick. The result is a dense forest of DNA chains all standing tall on the gold surface.

The researchers then shone a laser on to the gold, which liberates electrons via the photoelectric effect. Some of these electrons travel through the DNA forest and are fed into a device that measures their spin polarization. The team performed the experiment using linearly polarized laser light, which liberates unpolarized electrons. However, after travelling through the DNA, the electrons became polarized by as much as 60%.
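The 60% figure refers to the standard definition of spin polarization, P = (N_up − N_down)/(N_up + N_down). The counts below are illustrative numbers chosen to reproduce that figure, not measured data from the experiment.

```python
def spin_polarization(n_up, n_down):
    """Standard polarization measure P = (N_up - N_down) / (N_up + N_down)."""
    return (n_up - n_down) / (n_up + n_down)

# e.g. 80 spin-up electrons for every 20 spin-down gives the reported 60%
print(spin_polarization(80, 20))  # 0.6
```

On this measure an unpolarized beam scores 0, a perfect filter scores 1, so a 60% figure means four out of five transmitted electrons carry the favoured spin.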

The longer the better

The researchers found that the polarization was a strong function of the length of the DNA strands – with 80 base-pair-long strands giving 60% polarization but 25 base pairs only yielding about 10%. The team also found that the filter does not work when the DNA coverage is sparse, suggesting that the electrons are polarized by interactions with the lattice of strands, rather than individual strands.

Despite the strength of polarization effect, Naaman told physicsworld.com that the researchers are not certain why the effect occurs, but he believes that it is probably related to the "handedness", or "chirality" of the DNA double helix. While other physicists have shown that passage through a vapour of chiral molecules can affect the spin polarization of electrons, the effect is minuscule compared with what is seen with DNA. As a result the interaction at work in the vapour – spin–orbit coupling – is simply too weak to explain these recent results, according to Naaman.

Geert Rikken of the CNRS High Magnetic Field Laboratory in Toulouse, France, speculates that the effect could be a "Bragg-like resonance", which is a diffraction effect that occurs because the de Broglie wavelength of the electrons is about the same as the lattice spacing of the DNA strands. He points out that a similar spin-filtering of photons due to Bragg diffraction has been seen in cholesteric liquid crystals, which also have a helical structure. To gain a better understanding of the physics at work in the filter, the team is now studying the polarization of electrons that flow through the DNA strands, rather than the free electrons that travel past the strands.
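Rikken's Bragg-resonance suggestion rests on the electron's de Broglie wavelength, λ = h/√(2mE), being comparable to the DNA lattice spacing. A quick estimate for an electron of 1 eV kinetic energy (the energy is our illustrative assumption, not a figure from the experiment):

```python
from math import sqrt

H = 6.626e-34    # Planck constant, J s
M_E = 9.109e-31  # electron mass, kg
EV = 1.602e-19   # joules per electronvolt

def de_broglie_wavelength(energy_ev):
    """Non-relativistic de Broglie wavelength (m) of an electron."""
    return H / sqrt(2 * M_E * energy_ev * EV)

lam_nm = de_broglie_wavelength(1.0) * 1e9  # an illustrative 1 eV electron
print(f"{lam_nm:.2f}")  # ~1.23 nm
```

A wavelength of roughly a nanometre is indeed of the same order as the spacings in a dense DNA layer, which is what makes a diffraction-based explanation plausible.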

Benefits of DNA

Looking ahead, Naaman believes that spin devices based on organic materials such as DNA could offer several benefits. One is that spin-polarized currents should travel further in such materials – compared with metals – because the strength of the spin–orbit coupling is much smaller and because the spins are less likely to interact with vibrations in the material. Another benefit is that the ends of the DNA can be modified with a wide range of chemicals, which could make it possible to connect DNA devices to spintronic circuits in such a way that the spin polarization is not degraded at the connection.

However, Rikken is more cautious about the work. "I do not think that DNA films would be a welcome component in spintronic devices," he says. But he does think that other chiral structures could find application in spintronics – if chirality is found to be the mechanism behind the filtering, that is.

Beyond spintronics, the discovery that DNA has a strong effect on electron spin suggests that spin interactions could also play a role in some biological processes. Indeed, Naaman believes that studies of spin in biomolecules could shed light on poorly understood low-energy biochemical processes that occur in nature.

Future of cosmology looks bright in a dark universe

Cosmologists can rejoice: they'll still be able to do their jobs a trillion (10^12) years from now – even after the universe's expansion has pushed nearly all galaxies out of sight. That's the conclusion of an astronomer in the US, who argues that the giant black hole at the centre of our galaxy will eject stars into the void beyond, providing objects that future cosmologists can use to trace the universe's expansion.

Ever since the late 1990s, when astronomers used supernova explosions in distant galaxies to discover that the universe's expansion is accelerating, the far future of cosmology has seemed bleak. Within roughly 100 billion years, nearly all other galaxies will be so distant that their light won't reach us. As a result, future observers won't know that the universe is expanding. Furthermore, the cosmic microwave background – which is the Big Bang's afterglow and a key clue to the universe's origin – will be attenuated below the threshold at which it can be detected.

Standard scenario is 'wrong'

In October 2010 Abraham Loeb gave a public talk at the Harvard-Smithsonian Center for Astrophysics in Cambridge, Massachusetts, in which he recounted these difficulties. "People were very intrigued and came to me afterwards," says Loeb, who is professor of astronomy at Harvard University. "Someone said, 'Why don't you write an article about that?' And I said, 'Well, I'll think about it,' and then when I thought about it I realized that it's wrong: there will be a way in the distant future to verify the standard cosmological scenario that we now have."

The key to Loeb's proposal is so-called hypervelocity stars. In 1988 Jack Hills, then at Los Alamos National Laboratory in New Mexico, proposed that if a double star skirted close to the giant black hole at the Milky Way's centre, one star in the pair could fall toward the black hole. This star would lose an enormous amount of energy and, by conservation of energy, its companion would gain that energy and fly away at high speed.

'Hypervelocity stars will save the day'

Then, in 2005, Warren Brown of the Harvard-Smithsonian Center for Astrophysics and his colleagues announced the discovery of the first hypervelocity star. Astronomers have since spotted more than a dozen others. "These hypervelocity stars will save the day," says Loeb. That's because, even a trillion years from now, the black hole at the galaxy's centre will still be ejecting stars. These stars will probably be red dwarfs, dim suns that can live for trillions of years. By tracking the motions of these stars after they leave the galaxy, astronomers can deduce the universe's expansion.

Long before then, Loeb expects the Milky Way to merge with the Andromeda Galaxy, 2.5 million light-years away, producing a single larger galaxy he calls "Milkomeda". As a hypervelocity star leaves Milkomeda, the galaxy's gravity first slows it down, but eventually the accelerating universe speeds the star up. "By monitoring the motion of these stars, a future cosmologist could infer the existence of the cosmological constant," says Loeb. The cosmological constant represents the repulsive force of empty space; it is what is causing the cosmic expansion to accelerate.

The more massive Milkomeda is, the greater the distance at which the universe's expansion makes itself felt. If Milkomeda has 2 trillion times the mass of the Sun, Loeb calculates that this transition occurs 4.4 million light-years from us; if instead it has 10 trillion solar masses, the transition distance grows to 7.5 million light-years.
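These figures can be roughly reproduced by balancing the galaxy's inward gravitational pull against the outward acceleration produced by the cosmological constant. The sketch below assumes a present-day Hubble constant of 70 km/s/Mpc and a dark-energy fraction of 0.7; the article does not state which parameters Loeb used, so the results agree with his numbers only to within rounding:

```python
import math

# Constants (SI units)
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30     # solar mass, kg
Mpc = 3.086e22       # megaparsec, m
ly = 9.461e15        # light-year, m

# Assumed cosmological parameters (not quoted in the article)
H0 = 70e3 / Mpc      # Hubble constant, s^-1
Omega_L = 0.7        # dark-energy density fraction

def transition_distance_ly(mass_solar):
    """Distance at which dark-energy repulsion overtakes the galaxy's gravity.

    Balancing G*M/r^2 against the outward acceleration Omega_L * H0^2 * r
    gives r = (G*M / (Omega_L * H0^2))^(1/3).
    """
    r = (G * mass_solar * M_sun / (Omega_L * H0**2)) ** (1 / 3)
    return r / ly

# Milkomeda masses quoted in the article
print(f"{transition_distance_ly(2e12) / 1e6:.1f} million ly")
print(f"{transition_distance_ly(10e12) / 1e6:.1f} million ly")
```

The cube-root dependence on mass also explains why a five-fold increase in Milkomeda's mass (from 2 to 10 trillion solar masses) lengthens the transition distance by only a factor of about 1.7.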

Not bright enough?

However, Lawrence Krauss at Arizona State University in Tempe, who had earlier considered the potential of hypervelocity stars, is sceptical. "Right now, with supernovae – the brightest objects in the universe – we're barely able to discover the existence of a cosmological constant," says Krauss. "To propose that you can use individual stars to measure the expansion of the universe – it may be barely physically possible, but is it likely?" Krauss thinks not: a civilization a trillion years from now would probably consider Milkomeda and its satellite galaxies to be the entire universe, surrounded by a static void, and would have no incentive to spend vast sums of money tracking subtle changes in the velocities of a few dim and distant stars.

Loeb counters that these stars will be a thousand times closer than today's farthest supernovae and that a civilization will have a trillion years to develop a giant telescope to study the escaping stars.