Буча / Bucza / Bucha (2022-02-27)
<p>There was fighting tonight in Bucha. The Russian occupiers’ heavy military vehicles roll through the streets this morning. Less than a year ago my aunt visited this quiet town in the Kyiv suburbs to commemorate, together with the local council, the work of her great-grandfather, who was the town’s doctor. There was tea, and a school play, and cake, and a small handicraft exhibition. <a href="https://twitter.com/nexta_tv/status/1497838808104116227">There are now tanks in the street.</a></p>
<p>Like most people in Poland, I’ve got roots in Ukraine. Another family branch grows from the shared tragedy of Wołyń. I cannot help but get emotional about the subject.</p>
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/a/AVvXsEhU3Z2GWUS6MxDDtWkh6jIqoKAdDA4NfYJW510wWVT5e1hcLUVkoLtPp-y7fYLRCZ_ORlQZdghMS2jwdHEgHT7xL3UnZbqeAYKah9xx6DNl0nb56tlFm9_HGRlrxNX0B6GZw3MDFOzyQhQXdWsjr4wB0SC3OehZG5i3Q7VZoDxphId1lrSsi4-1m3CD=s4032" style="display: block; padding: 1em 0; text-align: center; "><img alt="Ukrainian-Polish border at Сянки/ Sianki / Syanki" border="0" width="520" data-original-height="3024" data-original-width="4032" src="https://blogger.googleusercontent.com/img/a/AVvXsEhU3Z2GWUS6MxDDtWkh6jIqoKAdDA4NfYJW510wWVT5e1hcLUVkoLtPp-y7fYLRCZ_ORlQZdghMS2jwdHEgHT7xL3UnZbqeAYKah9xx6DNl0nb56tlFm9_HGRlrxNX0B6GZw3MDFOzyQhQXdWsjr4wB0SC3OehZG5i3Q7VZoDxphId1lrSsi4-1m3CD=s400"/></a></div>
<p>But I also get angry. The madman dictator with the world’s largest nuclear arsenal is invading right at our doorstep, and he won’t stop at Ukraine. His land claims, based on nineteenth-century empire borders, include half of all the EU countries. Read that again: half of the EU members would have to fall under his rule to satisfy his current demands. He has threatened nuclear strikes; he has threatened to drop the International Space Station on Europe. This isn’t something you can safely ride out and ignore from far away.</p>
<p>Please help any way you can. Write to your MPs. Protest. Give to charities. Take in refugees.</p>
<p><a href="https://bank.gov.ua/en/news/all/natsionalniy-bank-vidkriv-spetsrahunok-dlya-zboru-koshtiv-na-potrebi-armiyi">I am giving all my non-essential income this month to helping Ukraine.</a></p>
Best books I've read in 2021 (2021-12-30)
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/a/AVvXsEiR0z8aRrsAwaCjgRjmZ_2pgQRILyJPnP1dIEI1BsTrtHAEFhsjYF2BKByxxB9jrcIKhpgiQ1quz2LVIXMsNMkWF7hbuqKoS_FFkD9309wPexJPjFA5L1UHtgc5YL8msjbsKymhEzIbY197eIb5UnpFW1GC2iRLBsbSEX7077zW3_M4ST-C0xxk14RG=s4032" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="400" data-original-height="3024" data-original-width="4032" src="https://blogger.googleusercontent.com/img/a/AVvXsEiR0z8aRrsAwaCjgRjmZ_2pgQRILyJPnP1dIEI1BsTrtHAEFhsjYF2BKByxxB9jrcIKhpgiQ1quz2LVIXMsNMkWF7hbuqKoS_FFkD9309wPexJPjFA5L1UHtgc5YL8msjbsKymhEzIbY197eIb5UnpFW1GC2iRLBsbSEX7077zW3_M4ST-C0xxk14RG=s400"/></a></div>
<p>Last January, I finally caught up with William Gibson’s classic <i>Sprawl</i> trilogy (<i>Neuromancer</i>, <i>Count Zero</i>, <i>Mona Lisa Overdrive</i>). While I’m fonder of his newer books (<i>The Peripheral</i> / <i>Agency</i>), <i>Neuromancer</i>, published in 1984, is really remarkable. It is gripping, and it’s hard to overstate how much Gibson has captured and shaped the collective nerd imagination. It’s no accident that the kids who grew up on <i>Sprawl</i> are now molding the tech companies to their preference. It is perhaps a shame, though, that we seem to have missed the fact that those books were meant to show a dystopia, not a preferred path forward.</p>
<p>In August, I went on a kind of media retreat. Not quite a hermitage, but I was nevertheless secluded in the Bieszczady mountains, mostly offline, my time dedicated to hiking - and books. I devoured the last three parts of the <i>Expanse</i> series (not quite true any more - the final book was published this December and I look forward to reading it). The James S.A. Corey writing duo has created something remarkable. The vision on screen is one thing, and the details of the written story have a different pacing to them - but the overall result is incredible just the same. These are books emotionally engaging enough that I sometimes need time away from them, so as not to amplify the stress of daily life - but that made them perfect reading for the leisurely summer holiday.</p>
<p>I’ve only really picked up audiobooks recently. Most of the non-fiction books this year I listened to rather than read - either on dog walks or while driving. Unfortunately I’m not the type of person who can listen to a book and simultaneously concentrate on work. However, adding audiobooks to my walks increased my overall book consumption quite a lot - I finished 36 books this year, the most since I started keeping track.</p>
<p>Of those, the most impactful was probably Akala’s reading of his own <i>Natives: Race and Class in the Ruins of Empire</i>. He’s got a brilliant voice, full of personality, and his writing is very engaging as well. The book touches on hard topics that UK museums still do their best to steer clear of, digging into the history and impact of slavery and the class system. From my Central European perspective, the passages on Ham and the biblical justifications of oppression were especially interesting.</p>
<p>The biblical Ham of English translations is Cham in Polish. The appetite kindled by Akala led me to two books focusing on history more local to my origins: <i>Chamstwo</i> by Kacper Pobłocki and <i>Ludowa Historia Polski</i> by Adam Leszczyński. Somewhat surprisingly (at least to myself), I was mostly blind to the impact that the class society of the nineteenth century still has on the supposedly classless culture and social norms of the twenty-first. These books serve as a harsh awakening. Kacper Pobłocki focuses more on the cultural side, while Adam Leszczyński re-analyses the history of Poland from the point of view of the vast majority of its population. It is not a pleasant picture, but the books are well worth reading.</p>
<p>A similar train of thought led me to finally reading <i>The Theory of the Leisure Class</i> by Thorstein Veblen. Over 120 years old and rather lengthy, it is a book I would probably have struggled to get through if not for the audiobook version. Veblen’s observations at the end of the era of aristocracy (even when it was not yet feeling its nearing demise, or at least downfall) are still applicable today. His views on race and gender are, unsurprisingly, badly outdated, but what stands out are the parts that do not change, starting with how the moneyed groups value tradition over human life.</p>
<p>Veblen’s book neatly tied into a very modern one I followed it with: <i>Capital Without Borders: Wealth Managers and the One Percent</i> by Brooke Harrington. In the aftermath of the Great Recession of 2008, Harrington trained for two years as a wealth manager and then continued her academic research for another six, documenting the off-shore and transnational nature of modern wealth. The book is as much a fascinating and morbid view into the modern upper class as it is a villain’s manual.</p>
<p>Then, prompted by a Freakonomics podcast episode, I read <i>Nudge: The Final Edition</i> by Richard H. Thaler and Cass R. Sunstein. In the 14 years since the first edition, governments all over the world (or at least the English-speaking ones) have embraced its approach. However, I was mostly reading it from a tech professional’s point of view, and it’s brilliant enough to be required training: it points out how practically every design decision has an impact on user behaviour, and needs to be considered from the user’s point of view.</p>
A year went by (2021-03-15)
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjbXEEIV1B76Z5XK72H2koW4GnQSRsh0P_NrC5bljSLLOr_w4h69oEsfQYjAtM8JHumN21m1b5u7BH_IFqHnIgwRpwuV_B6vJofMWKLKADu224-HgQLmELNGNAtoMAQ9eM0X-zah6e10kk/s4032/PXL_20210314_113818280.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="400" data-original-height="3024" data-original-width="4032" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjbXEEIV1B76Z5XK72H2koW4GnQSRsh0P_NrC5bljSLLOr_w4h69oEsfQYjAtM8JHumN21m1b5u7BH_IFqHnIgwRpwuV_B6vJofMWKLKADu224-HgQLmELNGNAtoMAQ9eM0X-zah6e10kk/s400/PXL_20210314_113818280.jpg"/></a></div>
<p>It’s been exactly a year since I posted about the pandemic for the first time. In retrospect, the numbers that were frightening then look positively optimistic now. In many countries, the first peak - the spring 2020 one - does not even register when looking at a full year’s history. The third peak has started in full swing in multiple places, and it might be worse than the second one. Poland is well on the way to topping the infection numbers from November 2020, though we’re not hitting comparable daily death counts yet. Perhaps the vaccination campaign has at least managed to remove the most vulnerable from the pool.</p>
<p>Poland now has 4.5 million people vaccinated, versus 1.9 million who’ve contracted the disease. Worldwide, it’s 355 million vaccinated versus 120 million recorded cases. A certain threshold has been reached, but it will be the end of 2021 before wealthy countries are done with the vaccination drive, and potentially even 2023 until the whole world has achieved a reasonable level of immunity. Global deaths are estimated at 2.65 million, but are likely to be severely underreported.</p>
<p>In terms of global impact, this definitely is a generation-defining event, as was <a href="https://skolima.blogspot.com/2020/03/pandemic.html">predicted a year ago</a>. In local terms, I see people I know dealing with it in very different ways. Burnout and anxiety are through the roof, as the unending unpredictability of the circumstances takes its toll. Statistically, I’m aware that birth rates have taken a heavy hit, but among my close family, this seems to have been the year to have offspring in multitudes.</p>
<p>Personally, I’ve tried to leave most social media and newspapers behind, as the news was causing me a significant amount of stress I could not resolve in any way. The winter darkness has also taken its usual toll. We have hardly met any friends or family in the past year. At times I question the sanity of my own choice, when I hear about elderly relatives entertaining 10+ guests, indoors. With their vaccinations only weeks away, this feels like particularly unreasonable behaviour - though we have discussed it so many times that I have no hope left of getting through to them.</p>
Death (2020-11-25)
<p>Globally, at least 59 million have been infected. At least 1.4 million have died. In Europe, almost 17 million infected, 383 thousand dead. In Poland, 876 thousand infected, 13 thousand dead. I find it difficult to believe the Polish figures, though, as the positive test ratio in Poland has been well above 30% for the last two months. Some days, over 60%. Some districts, over 100% for up to a week. The WHO recommends aiming for a positive test ratio under 3% to maintain an overview of the situation. Any time a discrepancy is found in Polish official data, it is resolved by no longer reporting the unconsolidated data points. In September and October, only patients exhibiting 4 symptoms of COVID-19 simultaneously (high fever, difficulty breathing, cough, loss of taste and smell) were being tested. Asymptomatic patients, or patients with mild symptoms, are not being tested at all. The wait time for test results currently stands at 3.5 days on average. As a matter of policy, patients not diagnosed before death are not being tested. Tests done commercially are not included in public statistics. Officially, Poland is through the peak of the second wave, but this feels like artificially generated optimism, created by severely limiting the number of tests being conducted. The excess mortality metric keeps rising fast and is currently the highest in Europe, at 86%.</p>
<p>COVID-19 mortality, across the whole world population: 0.02%. Mortality across the population of Europe: 0.05%. Mortality of confirmed COVID-19 cases aged under 40: under 0.5%. Mortality of confirmed cases, across various pre-existing health conditions: under 10%. The probability wave collapses when observing a singular point.</p>
<p>You develop fever, 39 °C, and shortness of breath. A day later, your partner loses the sense of smell (WHO reports: loss of smell is correlated with a milder form of the disease). Your partner tries to get a phone consultation with your registered family doctor, but it is difficult over the weekend, and neither of you manifests the full set of symptoms required to qualify for COVID-19 test. Friends hunt for available pulse oximeters online, two get delivered on Monday. At no point do they show blood oxygenation over 90%. Rescue services visit several times per day over the course of the week, suggesting auxiliary oxygen treatment at home. There are no available places at the city hospital. Someone delivers compressed oxygen canisters, someone else orders an oxygen concentrator device. Oxygen prices, both online and in pharmacies, are now at plainly absurd levels. Around mid-week, you finally get tested. 7 days from developing symptoms, SpO2 barely hovers over 80%. Oxygen inhalations provide a brief respite. On the 8th day, test results confirm COVID-19. During the night, you get admitted to A&E. Your child bursts into tears in the morning, as they did not get to say goodbye when you were taken in. Someone perished at the hospital that night, so on Saturday you get moved to the freed place in the isolation ward.</p>
<p>10 days from the infection is considered a threshold date - mild cases tend to recover by then. The prognosis worsens for those who don't. Your partner is still in quarantine, but feeling much better by now. You aren't.</p>
<p>16 days after symptoms started, you get a call from the health services - they are interested in conducting a tracing interview. You tell them you are in the isolation ward, that you find it hard to talk, that you can't really speak in whole sentences. You ask them to call back when you can. Your blood oxygen saturation struggles to climb over 80% while breathing concentrated oxygen. Mortality of cases with SpO2 still under 90% after 10 days on oxygen: 40%.</p>
<p>20 days since the symptoms started, SpO2 67% in the morning. You get moved to the intensive care unit. High-pressure oxygen administered.</p>
<p>21 days. SpO2 again critically low. Intubation. Attached to the ventilator ("respirator"). Mortality of cases on forced ventilation, best case scenario: over 90%.</p>
<p>When faced with a problem, I tend to look at the world through numbers. It helps me put things into perspective, develop plans, propose actions. I know the numbers. I have read the WHO reports, the relevant medical studies. There is nothing I can do. I do not tell the numbers to anyone.</p>
<p>My friend died from a pulmonary embolism last week, shortly after intubation. The city of Zielona Góra reported no COVID-19 deaths for the whole 7-day period.</p>
One million (2020-09-29)
<p>Total deaths worldwide have crossed one million, half a year (and a few days) after Europe went into emergency lockdown. 20% of those deaths have been in the USA. The total number of infected, while much harder to estimate, stands at around 33 million, with about 10 million of them currently sick.</p>
<p>It's really hard to comment on those numbers.</p>
<p>Locally, Poland crossed one thousand diagnosed cases per day a week ago and hasn't dropped below that daily threshold since. There are fewer deaths than in the first peak, but not by much.</p>
<p>A lot of the epidemiologists’ initial predictions are still holding: September-October is on track to be a second peak in infections; there are multiple vaccines in trials near the end of 2020, but none of them are likely to be globally distributed before the end of 2021. There has been no real return to the office for those who can afford remote work. What the crowds have been calling “expert scaremongering” turned out to be just expert knowledge.</p>
#BLM (2020-06-17)
<p>It looks like some countries have decided to pretend the pandemic is over, that all is going peachy and it is time to reopen. The UK and the USA are the main developed-world examples - still riding strong on the first wave. Poland is, in an odd way, in this camp as well - a roughly constant number of infections daily (well within health service capacity), no real drop - and a decision to open up anyway. People have got to earn money somehow, the saying goes.</p>
<p>And in terms of people earning money, the US statistics and predictions are that 30% of the companies that closed down will not, in any shape, recover. What's more, about a third of the people who lost jobs to coronavirus layoffs are not going to be re-employed in the same jobs, ever - those positions will be lost, or automated. This is in line with what was seen after 2007/2008 - still, it's grim news for those affected. The initial wave of stimulus money runs out soon; what happens next?</p>
<p>Three weeks ago we saw how most serious protests and revolutions start. It's not enough to be oppressed - people put up with a lot, get used to a lot. And, in the words of Black celebrities, the racism in the USA isn't getting worse, it just gets documented more. Still, this wasn't enough to blow the fuse. What ignited the truly massive protests was being scared, on the most basic level, about putting food on the table. About meeting your very basic needs. Those conditions have driven thousands of revolutionary movements before, and they drove BLM this time, bringing people in the USA (and some other countries) onto the streets in the hundreds of thousands.</p>
<p>Will things change? The administration is aligning itself even more openly with the KKK; the dog whistles are more like fog horns now. It's worth remembering, though, that by demographic numbers the previous presidential election was already one the GOP would not have won in a country with proportional representation, and those numbers have shifted even further over the four years of Trump's first term. Various methods of voter suppression can only go so far - at some stage the GOP will lose the presidency - will Trump actually leave the White House?</p>
Month and a half (2020-05-06)
<p>The current death count stands at about a quarter of a million.</p>
<p>The first peak seems to be over in Europe, and countries are heading towards easing the restrictions - with an eye towards a second (hopefully less tragic) peak in summer. The outliers that tried out unorthodox strategies: the UK, Sweden, the USA. Well, the UK now has the most deaths of all EU countries. Sweden has four times the mortality ratio of Norway or Finland. The USA...</p>
<p>The USA is still before its peak. There's now talk about "stabilising" at 3,000 deaths per day - which is about the total daily world death count at the moment. States are already opening up, with protesters demanding an end to social distancing measures.</p>
<p>The "month and a half" seems to be the point where most "stable" companies are running out of liquid cash. Mass layoffs are likely to happen at the May/June boundary, unless there's either a heavy government intervention or business can start again.</p>
<p>I'm starting my third week of unplanned holiday leave. My bread baking is getting much better (all the supply shortages seem to be over, confirming that they were mostly due to panic buying in March) - even though the last loaf was a real "dwarven bread", offensive-weapon-grade one. But I do know <i>why</i> it came out like that, and can improve. There's some gardening work, some house improvements, a bit of open source. In general, time is passing slowly. If this were a normal situation, it'd be a rather pleasant spring.</p>
Exponential (2020-03-30)
<p>In the last 7 days, the number of diagnosed cases went from roughly 350k worldwide to over 700k. A lot of people are about to learn what "exponential growth" means.</p>
<p>The tourism/travel business didn't so much crash as completely disappear in the 4 working days after I wrote the previous post. Numerous corporate groups simply terminated all contractors that week, my contract included.</p>
<p>Several countries are using "temporary" epidemic regulations to permanently erode civil liberties (UK, Hungary, Poland are the ones I'm following) - nothing like panic and emergency to get things rushed through.</p>
<p>It's not looking good.</p>
Pandemic (2020-03-15)
<p>I've been told recently that the best time to keep a journal is when things change rapidly. To be able to inspect one's views and perceptions, as they change. Thus, this post.</p><p>
A week ago on Saturday, Italy had just quarantined its northern regions. It seemed extreme, but already on Monday they extended the quarantine to the whole country. Today, Sunday again, one week later, most European countries have followed. The UK and Sweden were two notable outliers, until the UK decided to (mostly) accept the WHO guidelines as well. Poland has shut down international trains and flights and severely limited personal cross-border transit. The USA looks extreme in its lack of response.</p><p>
We started cancelling our holiday plans when Italy announced the Lombardy quarantine. Almost everything has been refunded by now, except that Ryanair obviously sees no reason at all to issue refunds or cancel flights. During the week, we also decided - together with my siblings - not to travel to my Mother's birthday. She ended up celebrating with my Dad and my brother who still lives with them, without any of the other planned guests. This seemed extreme at the beginning of the week, but by the time Saturday came, it was just the "new normal". We're video-calling again, daily - something I haven't done (for non-work reasons) since my early days in London in 2013.</p><p>
Gyms, bars, restaurants, offices, cinemas, everything shut. Preemptively, so far. There's no confidence yet whether this will slow down the growth enough to be meaningful, or whether it will only shift the peak without flattening it. The UK was trying to bet on "herd immunity", but there's no consensus yet - and even some counterexamples - on whether COVID-19 can or cannot re-infect.</p><p>
I'm lucky enough to be working from home, most of the time. But business travel took me to Germany a week ago - and while right after returning I laughed at the suggestion of self-quarantine, by the end of the week it was - again - normal. A random cough is now inspected extremely suspiciously - is it my normal allergy, just sped up by a month? Spring has come extremely early this year, after a mild and wet winter; we had wild garlic at the beginning of February, when it usually starts growing at the end of March.</p><p>
A few weeks ago, SETI@home announced it was shutting down its compute clients. Its "offspring", Folding@home, is now donating most of its compute power to projects related to analysis of the virus behind COVID-19 and to vaccine work. They've just announced that, with the signup spike they've experienced, they have assigned all currently available work units. Sitting at home, at least this feels like doing something to contribute. It's also heating the room up noticeably, especially when both the CPU and GPU are running at full power. The electricity from the solar panels is coming in handy.</p><p>
Economic impact? Well, it's officially a recession now. Probably the fastest one in history, with information (and panic) flowing faster than ever, plus the zero-fee brokers that sprouted last year. There's definitely going to be a big global impact. The travel industry got hit immediately: Flybe has gone bankrupt (they were really just dangling on a lifeline before); LOT is looking for a way out of its promised purchase of Condor; Norwegian airlines are down 80% on the stock market; even BA is struggling. There are public calls from the airlines to postpone or scrap planned emission taxes, as they would start from the very low current baseline of extremely limited air travel. A lot of bankruptcies and takeovers are definitely on the horizon.</p><p>
This brings me to the main point: what are the likely long-term effects? The emissions drop we're seeing is strictly temporary, limited to the quarantine period (even though airlines, for example, are using it to retire old fleets), and is unlikely to last. Remote work, remote shopping and e-government (Polish government offices, for example, are currently not open to citizens in person, yet are all still working) will likely stay at significantly higher levels afterwards, as the crisis is forcing them to happen - and thus showing where it is possible to continue "work as usual" without the commute. The US is possibly looking at the most redefining experience of all Western countries - whereas in most of them a health system reform afterwards is likely, in the US it's either going to be a full-scale "European-style social support net" (which the GOP has already voted down this week) or massive fatalities on the scale of several millions, comparable to the fallout from their involvement in WW2. And it would disproportionately impact the lower-income part of the population, as the wealthy and the office workers are the ones for whom it is easiest to self-quarantine and work from home. Even if controlled - if COVID-19 spreads through the European population with "low" (under 1%) death rates, as it has so far in South Korea, where it currently seems to be contained - this still leads to an unprecedented number of deaths among the elderly, in turn leading to an unprecedented wealth transfer to the younger generation.</p><p>Even the most optimistic predictions suggest this will be a defining moment for future decades.</p>
Automated installation of VS 2017 build tools (2017-10-04)
<p>Visual Studio 2017 has re-done the whole installation procedure, with the goal of making what used to be very painful - preparing a UI-less build agent image for automated .NET builds - nice and simple. And fast.
Well, it's not quite there yet. So as I was reading Chris' post on <a href="https://skeltonthatcher.com/blog/using-packer-create-windows-aws-amis-declarative-build-agents/">building AMI images for TeamCity build agents with Packer</a> I was nodding along until I came to the bit where VS 2015 tools get installed. What about current tooling?</p>
<p>Unfortunately, I wouldn't recommend using Chocolatey to install it, even though a package is available. The new installer has a nasty habit of exiting silently (or hanging) if something is amiss - and you'll want to be able to choose your <a href="https://docs.microsoft.com/en-us/visualstudio/install/workload-and-component-ids">VS workload packages</a>, which Chocolatey doesn't support.</p>
<p>What can fail? The installer tends to abort if any of the folders it tries to create already exists. That's why you're likely to have more luck if you don't install the .NET 4.7 framework separately - likewise, any .targets files or task DLLs that are not yet provided by the installer should be scripted for installation afterwards, not before. It took me a whole day to find this out.</p>
<p>The <a href="https://docs.microsoft.com/en-us/visualstudio/install/use-command-line-parameters-to-install-visual-studio">command line parameters for the installer</a> aren't too obvious either. "--wait" doesn't wait unless you wrap it in a PowerShell script. "--quiet" prints no diagnostics (well, duh), but "--passive" displays a UI - there's no option for "print errors to the command line". If you're struggling, you'll end up re-creating your test VMs and switching between multiple runs of "--passive" and "--quiet" to see if things finally work. Oh, and the download link isn't easy to find either (seriously, it seems to be completely missing from the documentation - thankfully, StackOverflow helps). And getting the parameters in the wrong order ends up with the installer hanging.</p>
<p>The short PowerShell script that finally worked for me is:</p>
<pre># download the Build Tools bootstrapper
$Url = 'https://aka.ms/vs/15/release/vs_buildtools.exe'
$Exe = "vs_buildtools.exe"
$Dest = "c:\tmp\" + $Exe
New-Item -ItemType Directory -Force -Path "c:\tmp" | Out-Null
$client = new-object System.Net.WebClient
$client.DownloadFile($Url, $Dest)
# workloads and components to install, passed as a single argument string
$Params = "--add Microsoft.VisualStudio.Workload.MSBuildTools " +
 "--add Microsoft.VisualStudio.Workload.WebBuildTools " +
 "--add Microsoft.Net.Component.4.7.SDK " +
 "--add Microsoft.Net.Component.4.7.TargetingPack " +
 "--add Microsoft.Net.ComponentGroup.4.7.DeveloperTools " +
 "--quiet --wait"
# run the installer, block until it finishes, then clean up
Start-Process $Dest -ArgumentList $Params -Wait
Remove-Item $Dest</pre>
<p>Is it faster than the VS 2015 installation? Not really: the old one had an offline version you could pre-load, while the new one is completely online (if you re-run it you'll get newer components!). And where with VS 2015 a t2.micro instance was enough to run the AMI creation job, this one needs a t2.medium to finish the installation in a reasonable amount of time. At least it includes most of the things that were missing before (I am still waiting for dotnetcore-2.0 to be included).</p>
Using zram for memory compression on Gentoo (2014-03-31)
<p>After reading an excellent LWN article about <a href="http://lwn.net/Articles/545244/">memory compression in the Linux kernel</a> and learning from a Google engineer that they employ zram to increase their workstations' available memory (on top of the 48 GB of physical RAM already installed...), I decided to give it a go. There are currently three different approaches to memory compression being trialled in the Linux kernel; of those, zram is the simplest but also the most mature - it is battle-tested, being enabled by default e.g. on Google Chromebooks, and it is also available as an option in Android 4.4.</p>
<p>zram works by presenting itself to the kernel as a swap device, while it is in fact backed by RAM. It has a fixed compression ratio of 50% (or, to be more exact, swapped-out pages are either stored two-to-one per actual RAM page used, or one-to-one if for some reason they don't compress). This simplifies access, keeping page offsets predictable. A recommended configuration reserves up to 100% of physical RAM for compressed access - this memory will be released back when the memory pressure subsides. This also assumes the pessimistic scenario of incompressible pages - in practice, the zram devices should not take much more than 50% of their advertised capacity, resulting in a potential 150% memory load before swapping to disk would need to occur.</p>
<p>Configuration starts with enabling the kernel module:</p>
<pre>Device Drivers --->
[*] Staging drivers --->
<M> Compressed RAM block device support</pre>
<p>This is done as a module, so that configuration can be easily changed via <code>/etc/modprobe.d/zram.conf</code>:</p>
<pre>options zram num_devices=3</pre>
<p>I've got the module set to auto-load via <code>/etc/modules-load.d/zram.conf</code> containing just a single line:</p>
<pre>zram</pre>
<p>Also needed is an entry for udev telling it how to handle zram block devices and setting their size (in <code>/etc/udev/rules.d/10-zram.rules</code>):</p>
<pre>KERNEL=="zram[0-9]*", SUBSYSTEM=="block", DRIVER=="", ACTION=="add", ATTR{disksize}=="0", ATTR{disksize}="2048M", RUN+="/sbin/mkswap $env{DEVNAME}"
</pre>
<p>And the last step is an <code>/etc/fstab</code> entry so that those block devices are actually used:</p>
<pre>/dev/zram0 none swap sw,pri=16383 0 0
/dev/zram1 none swap sw,pri=16383 0 0
/dev/zram2 none swap sw,pri=16383 0 0
</pre>
<p>I've seen guides recommending the creation of <code>ext4</code> volumes on zram devices for temporary folders. I would not advise that. Instead, create a standard <code>tmpfs</code> volume with the required capacity, which will result in better performance - and an unused zram device will release its memory back to the kernel anyway.</p>
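<p>For reference, a minimal <code>/etc/fstab</code> line for such a <code>tmpfs</code> mount might look like the one below - the 2G size cap is just an assumption, adjust it to your workload:</p>
<pre>tmpfs /tmp tmpfs size=2G,nodev,nosuid 0 0</pre>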
<p>I've been using this setup since November and haven't had any issues with it. I highly recommend enabling it on your workstation as well - after all, there's no such thing as too much RAM.</p>
Przelewy zagraniczne do Polski i wymiana walut / Foreign transfers to Poland and currency exchange (2013-06-20)
And now for something unrelated to programming. If you work abroad or are paying off a foreign-currency loan (euros/Swiss francs/etc.), the topic is probably familiar: you have money in an account in another country (or just in a foreign currency) and you need to transfer it to Poland or convert it, paying, of course, the smallest possible commission. I have already tried several approaches, so I will briefly describe the method I currently use, plus a few alternatives.<br />
<a href="https://secure.flickr.com/photos/pfala/2397388906/"><img alt="Tons of money by Paul Falardeau" border="0" height="398" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi8AadtJDW0-9CKiE_AdOYpBZLJ6khIWetg5e6P8zbT88Xb2jkSf0N1RQu2b7_wk8oOGkNu5aMLPy94U3PqRtwNfg7ICVM8CDpU1W9WfFYWJ-9DTP_uhp_u3thSdlnjgOgOrg8gcz7nySM/s530/2397388906_0febd8a757_o.jpg" width="530" /></a>
<br />
Let me start with the simplest situation, a foreign-currency loan: since September 2011, banks operating in Poland have had to provide clients, free of charge, with a technical account that lets them repay instalments without paying the spread. And that spread, depending on the bank, could reach as much as 6%. Yes, that is how much extra you hand your bank if you let it exchange the currency for you. Two years ago, when the <a href="https://pl.wikipedia.org/wiki/KNF">KNF</a> forced these changes on the banks, the internet sprouted online currency exchanges that let you exchange money at close to the market rate, with a minimal commission. Of the ones I have checked, the best offer (probably thanks to the largest turnover) comes from <a href="https://www.walutomat.pl/">Walutomat</a>, founded in Poznań by former Allegro employees. The company is registered as a currency exchange and is subject to the same Ministry of Finance supervision as physical exchange points, which for me is sufficient vetting. The exchange commission is 0.2%, and deposits and withdrawals to most large banks are free. If you use a bank in which Walutomat holds its own account, the whole exchange usually takes about 4 hours, most of which is spent waiting for the bank to book the incoming transfer. The service confirms all operations with SMS codes, and can use the same channel to notify you about completed orders and received transfers. Its only shortcoming is that it does not handle international transfers (a KNF requirement).<br />
If you often shop on foreign websites, it is worth getting a payment card attached to a foreign-currency account (<a href="http://www.aliorbank.pl/pl/klienci_indywidualni/konta_osobiste/konto_walutowe/produkty_towarzyszace">Alior</a> offers one, for example) and, instead of leaving the conversion to Visa/Mastercard (around 4% commission), doing it yourself.<br />
The second variant: you work outside Poland, but within the <a href="https://pl.wikipedia.org/wiki/Strefa_euro">euro zone</a>. The situation is essentially the same as in the first variant, because thanks to <a href="https://pl.wikipedia.org/wiki/SEPA">SEPA</a> transfers within the Union are free (in practice - the regulations require them to be "no more expensive than domestic ones"). You will need a foreign-currency account at a Polish bank (any decent one offers such an account for free). And that is the whole trick - a SEPA transfer should be booked on the next working day, which sometimes leads to absurdities, because it can arrive faster than a domestic one (Ireland, ahem, ahem). As for the currency exchange itself, I again refer you to <a href="https://www.walutomat.pl/">Walutomat</a>; I have not found a cheaper alternative. Definitely do not make a foreign-currency transfer to an account held in złoty, because the bank will charge up to 10% for the conversion.<br />
Things get most interesting (read: most annoying) when you work outside the euro zone, for example in the United Kingdom. <i>Splendid isolation</i> and all that. International transfers from the UK are expensive, around £20 (or more, depending on the bank). <a href="https://zagranica.bzwbk.pl/przelewy-z-zagranicy/wielka-brytania-uk/tanie-przelewy-z-wielkiej-brytanii-uk-do-polski.html">WBK offers cheaper transfers</a> (£2.50), but with low limits (£750) and the need to carry cash to the post office. A lot of hassle. On the other hand, if you earn really well, <a href="http://www.citibank.co.uk/personal/banking/international/globtransfer.htm?icid=TX-SERVICE-hpproductlink-FX-INT-HPPRODLINK-12012013">Citi offers free transfers</a> between its branches in any country - but charges a lot for an account that does not show sufficient monthly inflows (£1,800 plus 2 <i>Direct Debits</i> in the United Kingdom / 5,000 zł in Poland). The most convenient solution I have found so far is <a href="http://transferwise.com/u/c76b">TransferWise</a> (link with my referral code, first exchange free of commission). Its standard commission is higher than Walutomat's, at 0.5% (minimum £1), but the lack of a transfer fee makes it significantly more attractive. A simple calculation shows that below about £6,500 per transaction, <a href="http://transferwise.com/u/c76b">TransferWise</a> comes out cheaper (assuming £20 per transfer). In theory an exchange can take up to 4 working days; so far mine have completed in about 4 hours (from the transfer in the United Kingdom to the money arriving in my Polish account). The company was founded by Skype's first employee; they are based in Shoreditch, the hatchery of London start-ups, and are registered with the British <i>Financial Services Authority</i> as an intermediary for international money transfers. Since June they have also offered an alternative for collecting payments (PayPal-style services) under the name GetPaid.<br />
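To spell that calculation out (a rough sketch that ignores TransferWise's £1 minimum and assumes the £20 international transfer fee): the TransferWise route costs 0.5% of the amount, while the Walutomat route costs 0.2% of the amount plus £20 for the transfer. The two are equal when 0.3% of the amount equals £20, i.e. at roughly £6,700; below that, TransferWise wins.<br />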
Summing up all these options in one paragraph: if you need to exchange currencies within Poland, use <a href="https://www.walutomat.pl/">Walutomat</a>. If you want to move money to or from the United Kingdom or the United States, use <a href="http://transferwise.com/u/c76b">TransferWise</a>.
SquashFS Portage tree saving time and space (2013-05-19)
<p>Gentoo Portage, as a package manager, has the annoying side effect of using quite a lot of disk space and being, in general, slow. As I was looking to reduce the number of small file writes that <code>emerge --sync</code> inflicts on my SSD, I came back to an old and dusty trick - keeping your Portage tree as a SquashFS file. It's much faster than the standard setup and uses less disk space (76MB vs almost 400MB). Interested? Then read on!</p>
<a href="https://secure.flickr.com/photos/-oliviabee-/1314987998/" imageanchor="1" ><img width="530" height="378" border="0" alt="Squashes by Olivia Bee" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhPV9zWG39yrjtmOHLmHeCK6o-_SVR17nz_85tIe_Alm91bf6bm0_w-M4SrvKKSHM9Q73xlyVeVq0SVmUbHfZFVCsQ7tj5mDzKxA1ZlNn7VrQeaAohvmNuR6LQxneKQZJzasuGSONI4TOo/s530/1314987998_fe2a91e744_b.jpg" /></a>
<p>Requirements:</p>
<ul><li>SquashFS enabled in the kernel: <code>File systems -> Miscellaneous filesystems -> <M> SquashFS 4.0 - Squashed file system support</code> and <code>[*] Include support for ZLIB compressed file systems</code></li>
<li>Installed <code>sys-fs/squashfs-tools</code></li>
<li>Distfiles moved out of the portage tree, e.g. (in <code>/etc/portage/make.conf</code>): <code>DISTDIR="/var/squashed/distfiles"</code></li></ul>
<p>I'm also assuming that your <code>/tmp</code> folder is mounted as <code>tmpfs</code> (in-memory temporary file system) since one of the goals of this exercise is limiting the amount of writes <code>emerge --sync</code> inflicts on the SSD. You are using an SSD, right?</p>
<p>You will need an entry in <code>/etc/fstab</code> for <code>/usr/portage</code>:</p>
<pre>/var/squashed/portage /usr/portage squashfs ro,noauto,x-systemd.automount 0 0</pre>
<p>This uses a squashed portage tree stored as a file named <code>/var/squashed/portage</code>. If you are not using <a href="http://wiki.gentoo.org/wiki/Systemd">systemd</a> then replace <code>ro,noauto,x-systemd.automount</code> with just <code>ro,defaults</code>.</p>
<p>Now execute <code>mv /usr/portage/ /tmp/</code> and you are ready to start using the update script. Ah yes, I forgot about that part! Here it is:</p>
<pre>#!/bin/bash
# grab default portage settings
source /etc/portage/make.conf
# make a read-write copy of the tree
cp -a /usr/portage /tmp/portage
umount /usr/portage
# standard sync
rsync -avz --delete $SYNC /tmp/portage && rm /var/squashed/portage
mksquashfs /tmp/portage /var/squashed/portage && rm -r /tmp/portage
mount /usr/portage
# the following two are optional
eix-update
emerge -avuDN system world
</pre>
<p>And that's it. Since the SquashFS image is read-only, this script needs to first make a writable copy of the tree (in theory this is doable with UnionFS as well, but all I was able to achieve with it were random kernel panics), then update the copy through rsync and rebuild the squashed file. Make sure you have a fast rsync mirror configured.</p>
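<p>For example, in <code>/etc/portage/make.conf</code> you would point <code>SYNC</code> at a mirror close to you - the hostname below is just an illustration, pick one from the official mirror list:</p>
<pre>SYNC="rsync://rsync.europe.gentoo.org/gentoo-portage"</pre>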
<p>For me, this decreased the on-disk space usage of the Portage tree from over 400MB to 76MB, cut the sync time at least in half and made all <code>emerge/eix/equery</code> operations much faster. The memory usage of a mounted tree will be about 80MB; if you really want to conserve RAM you can just call <code>umount /usr/portage</code> when you no longer need it.</p>
Designing a public API (2013-04-08)
<p>Disclaimer: a bit over a month ago I joined the API team at <a href="http://www.7digital.com/">7digital</a>. This post was actually written before that and has stayed in edit mode far too long. I've decided not to update it, but instead to publish it as it is, with the intention of writing a follow-up once I have enough new, interesting insights to share.</p>
<p>The greatest example I know of what comes from a good <em>internal</em> API is Amazon. If you're not familiar with the story the way Steve Yegge (now at Google) told it, I recommend you <a href="https://plus.google.com/112678702228711889851/posts/eVeouesvaVX">read the full version</a> (mirrored, the original was removed). It's a massive post that was meant for internal circulation but went public by mistake. There's also a <a href="http://searchengineland.com/the-google-doesnt-get-platforms-family-intervention-memo-96619">good summary available</a> (still a long read). <a href="https://plus.google.com/110981030061712822816/posts/AaygmbzVeRq">Steve followed up with an explanation</a> after Google's PR division learnt that he had released his internal memo in public. If you're looking for an abridged version, there's a good re-cap available at <a href="http://apievangelist.com/2012/01/12/the-secret-to-amazons-success-internal-apis/">API Evangelist</a>.</p>
<a href="http://haxonite.deviantart.com/art/Motorway-Edit-51172921" imageanchor="1" ><img border="0" height="348" width="530" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjvwtnlZpMNTjNnK2AxvG2geML7kIkbUp33q9Mb2txQds9vTt_Zmmph9SrMfb5Q1cm_xYUhY1UYZQtcqIJ3YiO0A6stiKnEIXxX5ezsSjyepPv5uyyqui4gTEt39YVu8hIlkZ2UVYB-sn0/s530/Motorway_Edit_by_Haxonite.jpg" alt="motorway at night by Haxonite" /></a>
<p>I'd recommend you read all of those, but preferably not right now (apart from the 2-minute re-cap in the last link), as it would take the better part of an hour. A single-sentence summary: you won't get an API - a platform others can use - without using it yourself first, because a good API, unfortunately, can't be engineered up front; it has to grow.</p>
<p>So to start on the service-oriented route, you first have to take a look at how the various existing services your company has interact with each other. I am sure you will find at least one core platform, even if it's not recognised as such, with existing internal consumers. It's probably a mess. Talk to anyone who has worked with those projects (probably most programmers) and you'll hear lots of cautionary tales about how an API can go wrong, especially when you try to plan it all ahead and don't get everything right. And you won't - it's just not possible.</p>
<h3>Some of the lessons I've learnt so far (I'm sure others can add more!):</h3>
<ol><li>
<h3>You need to publish your API.</h3>
The last team I was with did this, sort of - we had NuGet packages (it's a .NET API, not a web one, OK?). Still, those packages contain the actual implementation, not only surface interfaces, so they are prone to breaking. And they expose much more than is actually needed or should be used, so a whole lot of code is frozen in place (see 2.).</li>
<li><h3>You need a deprecation mechanism.</h3>
Your API will change. Projects change, and the API needs to reflect this. It's easy to add (see the next point), but how do you remove? Consumers of the API don't update the definition packages; we've had cases where removing a call that had been marked as <code>[Obsolete]</code> for over a year broke existing code.</li>
<li><h3>You need to listen to feedback from consumers.</h3>
Internal consumers are the best, because you can chat with them in person. That's the theory, at least; I've seen project teams not talk to each other, and it becomes a huge problem. Because of problems with and gaps in the API, we had projects doing terrible things like reading straight from another database or, even worse, modifying it. This won't (hopefully) happen with an external consumer, but if the other team prefers to muck around in your DB instead of asking for the API endpoint they need, you don't have a working API.</li>
<li><h3>Your API needs to perform.</h3>
Part of the reason for the problems mentioned in 3. is that our API was slow at times. There were no bulk read/update methods (crucial for performance when working with large sets of items); we had bulk notifications in the form of NServiceBus queues, but those had performance problems as well. If the API is not fast enough for what it's needed for, it won't be used - it's that simple.</li>
<li><h3>You need to know how your API is used.</h3>
This last point is probably the most important. You won't know what you can remove (see 2.) or what is too slow (see 4.) if you don't have some kind of usage/performance measurement. Talk to your Systems team; I'm sure they will be happy to suggest a monitoring tool they already use themselves (and they are the most important users of your reporting endpoints). For Windows services, <a href="http://msdn.microsoft.com/en-us/library/system.diagnostics.performancecounter.aspx">Performance Counters</a> are a good start, as most administrators should already be familiar with them (see the sketch after this list). Make sure those reports are visible, and set up automatic alarms for warning conditions (if it's critical, it's already too late to act). Part of this is also having tests that mirror actual usage patterns (we had public interfaces that weren't referenced in tests at all) - if a public feature does not have an automated test then forget about it, it might as well not exist. Well, unless your idea of a test is "we deleted an unused feature and a month later found out another project broke" (see 2.).</li>
</ol>
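<p>As a small illustration of the Performance Counters approach mentioned in point 5: the sketch below registers a custom counter category and increments a counter from request-handling code. The category and counter names are made up for the example, and in a real service you would create the category once at deployment time, with administrative rights, rather than on every start-up.</p>
<pre>using System.Diagnostics;

public static class ApiMetrics
{
    const string Category = "MyApi"; // hypothetical category name

    public static void EnsureCountersExist()
    {
        // creating a category requires admin rights; do it once, e.g. in the installer
        if (!PerformanceCounterCategory.Exists(Category))
        {
            PerformanceCounterCategory.Create(
                Category, "Usage counters for the API",
                PerformanceCounterCategoryType.SingleInstance,
                "RequestsTotal", "Total number of API requests served");
        }
    }

    public static void RecordRequest()
    {
        // a writable instance (readOnly: false) lets us increment the counter
        using (var requests = new PerformanceCounter(Category, "RequestsTotal", readOnly: false))
        {
            requests.Increment();
        }
    }
}</pre>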
<p>In summary, the shortest (although still long!) path to a useful <strong>public</strong> API is to use it <strong>internally</strong>. Consumers with a quick feedback cycle are required to create and maintain a service-oriented architecture, and there's no faster feedback than walking to your neighbour's desk.</p>
SSD, GPT, EFI, TLA, OMG! (2012-12-16)
<p>I finally bought an SSD, so I took the drive change as an excuse to try out some other nifty new technologies as well: UEFI and GPT. Getting them to work (along with dual-boot operating systems - Gentoo + Windows 7) wasn't trivial, so I'll describe what was required to get it all humming nicely.</p>
<p>The hardware part was easy. The laptop I have came with a 1TB 5.4k rpm extra-slow hard drive plugged into its only SATA-3.0 port, but that's not a problem. There's another SATA-2.0 port, dedicated to a DVD drive - and why would anyone need that? I replaced the main drive with a fast Intel SSD (450MBps write, 500MBps read, 22.5K IOPS - seriously, they've become so cheap that if you're not using one you must be some kind of masochist who likes to stare blankly at the screen waiting for hard drive LEDs to blink), ordered a "hard drive caddy" off eBay ($9 including postage, although it took 24 days to arrive from Hong Kong) and started the system installation.</p>
<img border="0" height="398" width="530" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjrnIwDCubmdr2AYlTW1eh0gYz6QZaHaG920vui3css09kYpKMpOCBfzW9AOicc7zWYb75j6UCOXEe8fWDFxpdTl2ZBFnJs7VOr8BM0FxYBJal1O26-efkWWaYM7qNbigj6sRZACc9rLUM/s530/IMG_20121108_211611.jpg" alt="HDD and SSD on an open laptop" />
<p>Non-chronologically, but sticking to the hardware topic: the optical drive replacement caddy comes in three different sizes (for slot drives / slim 9.5mm / standard 12.7mm), and that's pretty much the only thing you have to check before you order one. The connectors and even the masking plastic bits are standardised, so the replacement operation is painless. The caddy itself weighs about 35g (as much as a small candy bar), so your laptop will end up a bit leaner than before.</p>
<p>DVD and an HDD in the caddy:</p>
<img border="0" height="398" width="530" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjXsNA8CpD5VIQs7rtzQ6xwY2cXtWdfmaKz0Z6X0O_33O-bE8Oi_zBfy7HXZyt5sdVvcwruh9YAdEAW8vwIKFp0NpMxk5lt65GCGOqFuFYG-Q139jMKJEBdpitwns6ZPW9eNaxKupMG3jU/s530/IMG_20121127_195952.jpg" alt="DVD and HDD in a replacement caddy" />
<p>You'll want to remove the optical drive while it's ejected, as the release mechanism is electrical, and one of the two hooks holding the bezel is only accessible when the drive is open. I used a flat screwdriver to unhook it, but be careful, as the mask is quite flimsy and might break. Only a cosmetic problem, but still. Showing the hooks:</p>
<img border="0" height="398" width="530" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjbVNlxXyFNibtt9z5TTzfp6bLIk8pDoqnV65TwmQXqUtPJ8pXMrgh69a39BTi3QW7X_GmWEqpsvrJ8CRbPPhXWINa37iAzD14moy0pWjnH2Ixy-Oj1knfIw0EdhsZUiKfsBXJNEwb2bXI/s530/IMG_20121127_200555.jpg" />
<p>That's pretty much everything that's needed on the hardware side - now to the software. I was following a Superuser post, <a href="http://superuser.com/a/415588/5020">Make UEFI, GPT, Bootloader, SSD, USB, Linux and Windows work together</a>, which describes the dual-boot installation procedure quite well. My first problem was that I couldn't get a UEFI boot to work from a DVD (back when I still had one). I went for the easiest solution, an <a href="https://help.ubuntu.com/community/Installation/FromUSBStick">Ubuntu live USB</a>, which managed to start in UEFI mode just fine.</p>
<p>There are quite a few "gotchas" here: you can't install a UEFI system if you're not already booted into UEFI mode (check the <code>dmesg</code> output for EFI messages). The starting payload needs to be 64-bit and reside on a FAT32 partition on a GPT disk (oversimplifying a bit, but those are the requirements if you want to dual-boot with Windows). A side note for inquiring minds: you'll also need a legal copy of Windows 7/8, as the pirate bootloaders require booting in BIOS mode. Oh, and your SATA controller needs to be set to AHCI mode, because otherwise TRIM commands won't reach your SSD and it will get slower and slower as it fills with unneeded (deleted, but not trimmed) data.</p>
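<p>A quick way to perform that check, as a rough sketch - the presence of the EFI directory under sysfs is the simplest tell:</p>
<pre># if this directory exists, the kernel was booted in UEFI mode
ls /sys/firmware/efi
# the boot log also mentions EFI
dmesg | grep -i efi</pre>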
<p>Once I had Ubuntu started, I proceeded with a mostly standard Gentoo installation procedure. Make sure you do your GPT partitioning properly (see the Superuser post, although the 100MB for the EFI boot partition might be too much - I have 16MB used on it and that's unlikely to change) and remember to mount the "extra" partition at <code>/boot/efi</code> before you install Grub2. The <a href="http://en.gentoo-wiki.com/wiki/UEFI#Kernel_Options">additional kernel options needed</a> are listed on the Gentoo Wiki, and the <a href="http://wiki.gentoo.org/wiki/GRUB2#UEFI.2FGPT">Grub2 installation procedure for UEFI</a> is documented there as well. Make sure that your Linux partitions are ext4 and have the <code>discard</code> option enabled.</p>
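<p>For reference, the EFI variant of the Grub2 installation boils down to something like the sketch below - this assumes the EFI system partition is already mounted at <code>/boot/efi</code>, and the exact command names and config path can differ between distributions (Gentoo's grub:2 package prefixes the binaries with <code>grub2-</code>):</p>
<pre>mount /boot/efi
grub2-install --target=x86_64-efi --efi-directory=/boot/efi
grub2-mkconfig -o /boot/grub/grub.cfg</pre>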
<p>All of this resulted in my machine starting - from pressing the power button to logging onto the Xfce desktop - in 13 seconds. Now it was time to break it by getting Windows installed. Again, the main hurdle proved to be starting the damn installer in UEFI mode (and you won't find out in which mode it runs until you try to install to a GPT disk and it refuses to continue because of unspecified errors). I finally got it to work by using the USB stick I had created for Ubuntu, replacing all of the files on the drive with the Windows installation DVD contents <b>and</b> extracting the Windows bootloader. That was the convoluted part, because a "normal" Windows USB key will only start in BIOS mode.</p>
<ul><li>Using 7zip, open file <code>sources/install.wim</code> from the Windows installation DVD and extract <code>\1\Windows\Boot\EFI\bootmgfw.efi</code> from it.</li>
<li>On your bootable USB, copy the folder <code>efi/microsoft/boot</code> to <code>efi/boot</code>.</li>
<li>Now take the file you extracted and place it in <code>efi/boot</code> as <code>bootx64.efi</code>.</li></ul>
<p>This gave me a USB key that starts the Windows installer in UEFI mode. You might want to disconnect the second drive (or just disable it) for the installation, as sometimes Windows decides to put its startup partition on the second drive.</p>
<p>Windows installation done, I went back to the Ubuntu live USB and restored Grub2. The last catch with the whole process is that, due to some bug, it won't auto-detect Windows, so you need an entry in the <code>/etc/grub.d/40_custom</code> file:</p>
<code><pre>menuentry "Windows 7 UEFI/GPT" {
insmod part_gpt
insmod search_fs_uuid
insmod chain
search --fs-uuid --no-floppy --set=root 6387-1BA8
chainloader ($root)/EFI/Microsoft/Boot/bootmgfw.efi
}</code></pre>
<p>The <code>6387-1BA8</code> identifier is the partition's UUID; you can easily find it by running <code>ls -l /dev/disk/by-uuid/</code>.</p>
<p>Dual-booting is usually much more trouble than it's worth, but I did enjoy getting this all to work together. Still, probably not a thing for the faint of heart ;-) I also have to admit that after two weeks I no longer notice how quick boot and application start-up are (Visual Studio 2012 takes less than a second to launch with a medium-sized solution - it's too fast to measure meaningfully); it's just that every non-SSD computer feels glacially slow.</p>
<p>In summary: why are you still wasting your time using a hard drive instead of an SSD? Replace your optical drive with a large HDD for data and put your operating system and programs on a fast SSD. The hardware upgrade is really straightforward to do!</p>
skolimahttp://www.blogger.com/profile/13638993878949515686noreply@blogger.com0tag:blogger.com,1999:blog-4782981620644831943.post-54624327442254854792012-09-30T23:52:00.000+01:002012-09-30T23:54:33.915+01:00Handling native API in a managed application<p>Although Windows 8 and .NET 4.5 have already been released, bringing WinRT with them and promising the end of P/Invoke magic, there's still a lot of time left until programmers can really depend on that. For now, the most widely available way to interact with the underlying operating system from a C# application, when the framework doesn't suffice, remains P/Invoking the Win32 API. In this post I describe my attempt to wrap an interesting part of that API for managed use, pointing out several possible pitfalls.</p>
<a href="http://www.flickr.com/photos/19779889@N00/4398186065/"><img border="0" height="351" width="500" alt="rusted gears" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjjWbNWzuGU5-EKbXvonTMBVbyV2vivLpeXUr0_Y_GBAQ4ICA9JEPmvSBscyhm3OBfXtYPy83Q0wOfT5JIdiyWzYG-QIik-4E0sM2PteGPnQHiInmwYBFw6gzpaCnZzRsoSqjU1CWiJpQQ/s800/gears.jpg" /></a>
<p>Let's start with a disclaimer: almost everything you need from your .NET application is doable in clean, managed C# (or VisualBasic or F#). There's usually no need to descend into P/Invoke realms, so please consider again if you really have to break from the safe (and predictable) world of the Framework.</p>
<p>Now take a look at one of the use cases where the Framework does not deliver the necessary tooling: I have an application starting several child processes, which may in turn start other processes as well, over which I have no control. But I still need to turn the whole application off, even when one of the grandchild processes breaks in a bad way and stops responding. (If this is really your problem, then take a look at <a href="https://github.com/ccnet/CruiseControl.NET/blob/master/project/core/util/KillUtil.cs">KillUtil.cs</a> from <a href="http://cruisecontrolnet.org/projects/ccnet">CruiseControl.NET</a>, as this was ultimately what I had to do.)</p>
<p>There is a very nice mechanism for managing child processes in Windows, called Job Objects. I found several partial attempts at wrapping it into a managed API, but nothing that really fitted my purpose. An entry point for grouping processes into jobs is the <a href="http://msdn.microsoft.com/en-us/library/windows/desktop/ms682409.aspx">CreateJobObject</a> function. This is a typical Win32 API call, requiring a structure and a string as parameters. Also, the meaning of the parameters might change depending on their values. Not really programmer-friendly. There are a couple of articles on how the native types map into .NET constructs, but it's usually fastest to take a look at <a href="http://www.pinvoke.net/">PInvoke.net</a> and write your code based on the samples there. Keep in mind that it's a wiki and the examples will often contain errors.</p>
<p>What kind of errors? For one, they might not consider 32/64-bit compatibility. If that's important to you, be sure to compile your application in both versions - if your P/Invoke signatures aren't right, you'll see some ugly heap corruption exceptions. Another thing often missing from the samples is error checking. Native functions do not throw exceptions; they return status codes and update the global error status, in a couple of different ways. Checking how a particular function communicates failure is probably the trickiest part of wrapping. For that particular method I ended up with the following signature:</p>
<pre>[DllImport("kernel32", SetLastError = true, CharSet = CharSet.Auto)]
private static extern IntPtr CreateJobObject(IntPtr lpJobAttributes, string lpName);</pre>
<p>The <code>static extern</code> modifiers are required by the P/Invoke mechanism; <code>private</code> is a good practice - calling those methods requires a bit of special handling on the managed side as well. You might also have noticed that I omitted the <code>.dll</code> part of the library name - this doesn't matter on Windows, but Mono will substitute a suitable extension based on the operating system it's running on. For the error reporting to work, it's critical that the status is checked as soon as the method returns. Thus the full call is as follows:</p>
<pre>IntPtr result = CreateJobObject(IntPtr.Zero, null);
if (result == IntPtr.Zero)
throw new Win32Exception();</pre>
<p>On failure, this will read the last reported error status and throw a descriptive exception.</p>
<p>Every class holding unmanaged resources should be <code>IDisposable</code> and also include proper cleanup in its finalizer. Since I'm only storing an <code>IntPtr</code> here, I'll skip the finalizer, because in some scenarios I might not want the job group to be closed. In general that's a bad pattern; it would be better to have a parameter controlling the cleanup instead of "forgetting" the <code>Dispose()</code> call on purpose.</p>
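<p>As a minimal sketch of that pattern (a stripped-down, hypothetical wrapper, not the exact class from the gist at the end of this post), the disposable part could look like this:</p>
<pre>using System;
using System.ComponentModel;
using System.Runtime.InteropServices;

public sealed class JobObjectHandle : IDisposable
{
    [DllImport("kernel32", SetLastError = true, CharSet = CharSet.Auto)]
    private static extern IntPtr CreateJobObject(IntPtr lpJobAttributes, string lpName);

    [DllImport("kernel32", SetLastError = true)]
    [return: MarshalAs(UnmanagedType.Bool)]
    private static extern bool CloseHandle(IntPtr hObject);

    private IntPtr handle;

    public JobObjectHandle()
    {
        handle = CreateJobObject(IntPtr.Zero, null);
        if (handle == IntPtr.Zero)
            throw new Win32Exception(); // wraps Marshal.GetLastWin32Error()
    }

    // No finalizer on purpose (see above) - Dispose() has to be called explicitly.
    public void Dispose()
    {
        if (handle != IntPtr.Zero)
        {
            CloseHandle(handle);
            handle = IntPtr.Zero;
        }
    }
}</pre>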
<p>There's quite a lot of tedious set-up code involved in job group control that I won't be discussing in detail (it's at the end of this post if you're interested), but there are a couple of tricks I'd like to point out. The first, mentioned multiple times in the P/Invoke documentation (yet still missing from some samples), is the <code>[StructLayout (LayoutKind.Sequential)]</code> attribute, instructing the runtime to lay out your structures in memory exactly as they are declared in the source. Without it, padding might be applied or the members might even get swapped because of memory access optimisation, which would break your native calls in ways that are difficult to diagnose (especially if the size of the structure still matched).</p>
<p>As I mentioned before, Win32 API calls often vary their parameters' meaning based on their values, in some cases expecting differently sized structures. When this happens, information on the size of the structure is also required. Instead of counting bytes manually, you can rely on <code>Marshal.SizeOf (typeof (JobObjectExtendedLimitInformation))</code> to do this automatically.</p>
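<p>A short sketch of that idea - the struct here is a made-up stand-in, not the real <code>JOBOBJECT_*</code> layout:</p>
<pre>using System;
using System.Runtime.InteropServices;

[StructLayout(LayoutKind.Sequential)]
struct NativeLimitInfo
{
    public ulong PerProcessUserTimeLimit;
    public uint LimitFlags;
}

static class StructSizeSample
{
    static void Main()
    {
        // Let the marshaller compute the unmanaged size instead of counting bytes by hand.
        int length = Marshal.SizeOf(typeof(NativeLimitInfo));
        IntPtr buffer = Marshal.AllocHGlobal(length);
        try
        {
            var info = new NativeLimitInfo { LimitFlags = 0x2000 };
            Marshal.StructureToPtr(info, buffer, false);
            Console.WriteLine("Passing {0} bytes to the native call", length);
        }
        finally
        {
            Marshal.FreeHGlobal(buffer);
        }
    }
}</pre>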
<p>The third tip is that native flags are best represented as enum values and OR'ed / XOR'ed as normal .NET enums:</p>
<pre>[Flags]
private enum LimitFlags : ushort
{
JobObjectLimitKillOnJobClose = 0x00002000
}</pre>
<p>Wrapping an unmanaged API often reveals other problems with its usage. In this case, the first problem was that Windows 7 uses Compatibility Mode to launch Visual Studio, which wraps it and every program it starts in a job object. Since a process can't (at least not in Windows 7) belong to multiple job objects, my new job group assignment would fail and the code would never work inside a debugger. As usual, StackOverflow proved to be helpful in <a href="http://stackoverflow.com/q/89791/3205">diagnosing and solving this problem</a>.</p>
<p>However, my use case is still not fulfilled: if I add my main process to the job group, it will be terminated as well when I close the group. If I don't, then a child process might spin off children of its own before it is added to the group. In native code, this would be handled by creating the child process as suspended and resuming it only after it has been added to the job object. Unfortunately for me, it turns out that <a href="http://msdn.microsoft.com/en-us/library/system.diagnostics.process.start.aspx"><code>Process.Start</code></a> performs a lot of additional set-up that would be much too time-consuming to replicate. Thus I had to go back to the simple KillUtil approach.</p>
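<p>For the curious: the gist of that approach is recursively terminating the whole process tree. A minimal sketch (my simplification, not the actual CruiseControl.NET code) can simply delegate the tree walk to the built-in <code>taskkill</code> utility on Windows:</p>
<pre>using System.Diagnostics;

static class ProcessTreeKiller
{
    // /T kills the whole tree rooted at the given process id, /F forces termination.
    public static void KillTree(int pid)
    {
        var startInfo = new ProcessStartInfo("taskkill", string.Format("/PID {0} /T /F", pid))
        {
            UseShellExecute = false,
            CreateNoWindow = true
        };
        using (var killer = Process.Start(startInfo))
        {
            killer.WaitForExit();
        }
    }
}</pre>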
<p>I've covered a couple of the most common problems with calling native methods from a managed application and presented some useful patterns that make working with them easier. The only part missing is the complete wrapper for the API in question:</p>
<script src="https://gist.github.com/3808452.js?file=JobObject.cs"></script>skolimahttp://www.blogger.com/profile/13638993878949515686noreply@blogger.com2tag:blogger.com,1999:blog-4782981620644831943.post-26828681184821415252012-08-31T01:14:00.000+01:002012-08-31T01:14:24.036+01:00Dynamic log level with log4net<p>Out of all the features of <a href="http://logging.apache.org/log4net/">log4net</a>, the most useful and the least known at the same time is the possibility for the logger to dynamically change the logging level based on future events. Yes, future! Nothing like a little clairvoyance to produce clean and usable log files.</p>
<p>log4net can buffer incoming events and, when an error occurs, write out the sequence of actions that led to it - and if nothing wrong happens, then the excessive messages are dropped. The class that allows for that is <a href="http://logging.apache.org/log4net/release/sdk/log4net.Appender.BufferingForwardingAppender.html">BufferingForwardingAppender</a>. It wraps around another log appender (e.g. file or console or smtp or database or eventlog or whatever else you would like log4net to write to) and uses an <a href="http://logging.apache.org/log4net/release/sdk/log4net.Core.ITriggeringEventEvaluator.html">evaluator</a> to decide when to flush buffered data. Let's have a look at a sample configuration (<code>app.config</code> file):</p>
<pre>
<?xml version="1.0" encoding="utf-8"?>
<configuration>
<configSections>
<section name="log4net" type="log4net.Config.Log4NetConfigurationSectionHandler, log4net" />
</configSections>
<log4net>
<!-- see http://logging.apache.org/log4net/release/config-examples.html for more examples -->
<appender name="ConsoleAppender" type="log4net.Appender.ConsoleAppender">
<threshold value="WARN" />
<layout type="log4net.Layout.PatternLayout">
<conversionPattern value="%-4timestamp [%thread] %-5level %logger %ndc - %message%newline" />
</layout>
</appender>
<!-- you should use a RollingFileAppender instead in most cases -->
<appender name="FileAppender" type="log4net.Appender.FileAppender">
<file value="my_application.log" />
<!-- pattern is required or nothing will be logged -->
<layout type="log4net.Layout.PatternLayout">
<conversionPattern value="%-4timestamp [%thread] %-5level %logger %ndc - %message%newline" />
</layout>
</appender>
<appender name="BufferingForwardingAppender" type="log4net.Appender.BufferingForwardingAppender" >
<evaluator type="log4net.Core.LevelEvaluator">
<threshold value="ERROR" />
</evaluator>
<bufferSize value="50" />
<lossy value="true" />
<appender-ref ref="FileAppender" />
</appender>
<!-- root is the main logger -->
<root>
<!-- default is INFO, this performs initial filtering -->
<level value="DEBUG"/>
<!-- messages are sent to every appender listed here -->
<appender-ref ref="BufferingForwardingAppender"/>
<appender-ref ref="ConsoleAppender" />
</root>
</log4net>
</configuration>
</pre>
<p>Now this is a wall of text. What is going on here?</p>
<ul>
<li><code>configSections</code> is a standard .NET <a href="http://msdn.microsoft.com/en-us/library/system.configuration.configurationsection.aspx">configuration section declaration</a></li>
<li>then we declare a <a href="http://logging.apache.org/log4net/release/sdk/log4net.Appender.ConsoleAppender.html">ConsoleAppender</a> that will print everything of level WARN or above to console - you can configure a ColoredConsoleAppender instead to have prettier output</li>
<li>following that is a <a href="http://logging.apache.org/log4net/release/sdk/log4net.Appender.FileAppender.html">FileAppender</a>, which simply outputs everything to a file</li>
<li>next one is the magical <code>BufferingForwardingAppender</code>, containing an evaluator that triggers for every message of level ERROR or above, a lossy buffer of size 50 (which means that once more messages accumulate, the oldest ones are discarded) and a target appender that will receive messages when they are flushed</li>
<li>the last element is the <code>root</code> logger, which is the default sink for all the messages - it contains references to our appenders and will feed messages to them</li>
</ul>
<p>So far so good. log4net now needs to be instructed to parse this configuration - my preferred way is with an assembly attribute:</p><pre>[assembly: log4net.Config.XmlConfigurator (Watch = true)]</pre><p>You can specify a file path in this attribute if you don't want to store your configuration inside <code>app.config</code>. A simple way to create a logger is just</p><pre>private static readonly log4net.ILog log = log4net.LogManager.GetLogger ( System.Reflection.MethodBase.GetCurrentMethod ().DeclaringType );
</pre><p>and we're good to go. Now all that remains is dumping some log messages into our log.</p>
<pre>for (int i = 0; i < 1025; i++)
{
    log.DebugFormat("I'm just being chatty, {0}", i);
    if (i % 2 == 0)
        log.InfoFormat("I'm just being informative, {0}", i);
    if (i % 20 == 0)
        log.WarnFormat("This is a warning, {0}", i);
    if (i % 512 == 0)
        log.ErrorFormat("Error! Error! {0}", i);
}</pre>
<p>When you execute this sample code you will see every warning and error printed to the console. The contents of <code>my_application.log</code>, however, will look different: it will contain only the errors and the (up to 50) messages that were logged before each error. Now that's much easier to debug, isn't it?</p>
<p>Please also take a look at how I include parameters in the logging calls: using the <code>DebugFormat()</code> overloads means that the strings are not formatted until this is necessary - so if a log message is suppressed, no new string will be allocated and no <code>ToString()</code> will be called. This might not change your application's performance a lot, but it's a good practice that is worth following. And one last thing to remember: log4net, by default, does not do anything. In order to get any output, you need to explicitly request it - most likely through configuration.</p>skolimahttp://www.blogger.com/profile/13638993878949515686noreply@blogger.com1tag:blogger.com,1999:blog-4782981620644831943.post-90990888860027633862012-08-01T01:00:00.003+01:002014-09-09T09:06:26.698+01:00NuGet proxy settings<p>This post is based on code present in NuGet 2.0.</p>
<p>NuGet reads web proxy settings from three distinct sources, in order:</p>
<ul>
<li>configuration files</li>
<li>environment variables</li>
<li>current user's <i>Internet Options</i></li>
</ul>
<p>While the layout of IE's Connection Settings is probably familiar to you if you are behind a corporate firewall and require proxy configuration to access the Internet, the first two options require a bit of explanation.</p>
<p>For configuration files, NuGet first considers <code>.nuget\NuGet.config</code> and then falls back to <code>%APPDATA%\NuGet\NuGet.config</code>. Relevant configuration entries are <code>http_proxy</code>, <code>http_proxy.user</code> and <code>http_proxy.password</code>. You can either edit them manually, by adding a line under the <code><settings></code> node:</p>
<code><add key="http_proxy" value="http://company-squid:3128" /></code>
<p>or you can add them from NuGet command line:</p>
<code>nuget.exe config -set http_proxy=http://company-squid:3128</code>
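<p>For completeness, a sketch of how all three keys could look together in the configuration file - the values are placeholders, and the exact surrounding structure may differ between NuGet versions:</p>
<pre><settings>
  <add key="http_proxy" value="http://company-squid:3128" />
  <add key="http_proxy.user" value="DOMAIN\proxyuser" />
  <add key="http_proxy.password" value="secret" />
</settings></pre>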
<p>If those variables aren't found in the configuration files, NuGet will fall back to checking standard environment variables for proxy configuration. By pure coincidence, the variables have the same names as the configuration options ;-). Names are not case-sensitive, but you might have to experiment a bit until you get NuGet to parse your settings properly if you have a space where it wouldn't expect one (e.g. in your user name).</p>
<p>Finally, if you are running NuGet in your user account, not using a service account (e.g. on a continuous build server), it will simply pick up whatever you have configured in the <i>Control Panel</i> as the system proxy server. All credentials configured there (including Active Directory single sign-on mechanism) should work without any work on your part.</p>skolimahttp://www.blogger.com/profile/13638993878949515686noreply@blogger.com1tag:blogger.com,1999:blog-4782981620644831943.post-63480535959256657612012-06-04T07:19:00.000+01:002012-06-04T07:21:04.805+01:00Why aren't C# methods virtual by default?<p>Recently, during <a href="http://2012.geecon.org/">GeeCON 2012</a> conference, I had a very interesting conversation with <a href="http://crazyjavahacking.org/">Martin Skurla</a> on differences between the .NET runtime and the Java Virtual Machine. One of the more surprising divergences is centred around the <code>virtual</code> keyword.</p>
<p>Virtual methods are one of the central mechanisms of <a href="http://en.wikipedia.org/wiki/Polymorphism_in_object-oriented_programming">polymorphic objects</a>: they allow a descendant object to replace the implementation provided by the base class with it's own. In fact, they are so important that in Java all public methods are virtual by default. even though this does carry a small runtime overhead. The virtual method dispatch is usually implemented using a virtual method table, thus each call to such a method requires an additional memory read to fetch the code address - it cannot be inlined by the compiler. On the other hand, a non-virtual method can have it's address inlined in the calling code - or even can be inlined whole, as is the case with trivial methods such as most C# properties.</p>
<p>There are several ways of dealing with this overhead: <a href="http://en.wikipedia.org/wiki/HotSpot">HotSpot JVM</a> starts the program execution in interpreted mode and does not compile the bytecode into machine code until it gathers some execution statistics - among those is information, for every method, if it's virtual dispatch has more than a single target. If not, then the method call does not need to hit the <a href="http://en.wikipedia.org/wiki/Virtual_method_table">VTable</a>. When additional classes are loaded, the JVM performs what is called a de-optimization, falling back to interpreted execution of the affected bytecode until it re-verifies the optimization assumptions. While technically complex, this is a very efficient approach. .NET takes a different approach, akin to the C++ philosophy: <em>don't pay for it if you don't use it</em>. Methods are non-virtual by default and the <a href="http://en.wikipedia.org/wiki/Just-in-time_compilation">JIT</a> performs the optimization and machine code compilation only once. Because virtual calls are much rarer, the overhead becomes negligible. Non-virtual dispatch is also crucial for the aforementioned special 'property' methods - if they weren't inlineable (and equivalent in performance to straight field access), they wouldn't be as useful. This somewhat simpler approach has also the benefit of allowing for <em>full</em> compilation - JVM need to leave some trampoline code between methods that will allow it to de-optimize them selectively, while .NET runtime, once it has generated the binaries for the invoked method, can replace (<em>patch</em>) the references to it with simple machine instructions.</p>
<p>I am not familiar with any part of the ECMA specification that would prohibit the .NET runtime from performing the de-optimization step and thus taking the HotSpot approach to the issue (apart from the huge Oracle patent portfolio covering the whole area). What I do know is that since the first version of the C# language did not choose <em>virtual</em> to be the default, future versions will not change this behaviour - it would be a huge breaking change for the existing code. I've always assumed that the performance trade-off rationale was the reason for the difference in behaviour - and this was also what I explained to Martin. Mistakenly, as it turns out.</p>
<p>As <a href="http://www.artima.com/intv/nonvirtual.html">Anders Hejlsberg, the lead C# architect, explains in one of his interviews</a> from the beginning of the .NET Framework, a virtual method is an important API entry point that does require proper consideration. From a software versioning point of view, it is much safer to assume method hiding as the default behaviour, because it allows full substitution according to the <a href="http://en.wikipedia.org/wiki/Liskov_substitution_principle">Liskov principle</a>: if the subclass is used instead of an instance of the base class, the code behaviour will be preserved. The programmer has to consciously design with substitutability in mind; he has to choose to allow derived classes to plug into certain behaviours - and that prevents mistakes. C# is on its fifth major release, Java - its seventh, and each of those releases introduces new methods into some basic classes. Methods which, if your code has a derived class that already used the new method's name, constitute breaking changes (if you are using Java) or merely compilation warnings (on the .NET side). So yes, a good public API should definitely expose as many plug-in points as possible, and most methods in publicly extendable classes should be virtual - but C# designers did not want to force this additional responsibility upon each and every language user, leaving this up to a deliberate decision.</p>skolimahttp://www.blogger.com/profile/13638993878949515686noreply@blogger.com0tag:blogger.com,1999:blog-4782981620644831943.post-5592657050365501702012-05-01T13:57:00.000+01:002012-05-01T13:57:22.866+01:00Tracking mobile visitors with Google Analytics<p>I've seen some strange approaches to tracking mobile visits using Google Analytics, which is quite surprising - especially considering that this is something that Analytics does out of the box. Granted, the <em>Standard Reporting -> Audience -> Mobile</em> page does not show much, apart from mobile operating system and resolution, but there's a very nice tool that allows any report to be filtered by a custom parameter.</p>
<p>I'm not talking about <em>Profiles</em>, which, although powerful, are only applied as data is gathered, and cannot be selectively enabled and disabled for existing statistics. Advanced segments are a very mighty, yet not well known tool. They can filter any existing report (e.g. <em>Content</em>, to see what pages should be the first to get a mobile-friendly layout). Most importantly - they can be mixed and matched, to show multiple facets of your site's traffic at once:</p>
<img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg1A46Ew3IMB77ru5a3FkH08bIpolT_xFtWj1cz7jQGgyHuUK14Ttxebh-jlOCE4qFGaxgbqZysRgBdVM7FhEBgL9cONyVekDN1BFHtghyphenhyphenz8RBpb1LvYeMAgHpUrvUFX6sTwbz_tvv2N1g/s800/advanced_segments_visits_by_browser.png" alt="Visitors by browser" title="Visitors by browser"width="530" height ="512" />
<p>As Google today enabled <a href="http://analytics.blogspot.com/2012/03/share-your-custom-reports-advanced.html">custom reports and advanced segments sharing</a>, you can just click my link to <a href="https://www.google.com/analytics/web/permalink?type=advanced_segment&uid=ZGjjK1l-SvW1ZcO8vehZVQ">add <em>Advanced segment - Mobile</em></a> to your Google Analytics dashboard. If you would rather define it manually (and you should - you'll probably want to define other advanced segments for your site), then proceed as follows:</p>
<ul>
<li>Go to <em>Standard Reporting -> Advanced Segments</em> and click <em>New Custom Segment</em></li>
<li>In the new form, set <em>Name</em> to <b>Mobile</b>, and parameters to <b>Include, Mobile, Exactly matching, Yes</b></li>
<li>Press <em>Save Segment</em> and you're done.</li>
</ul>
<img height="544" width="530" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhOrKbMCnalH-Ny5RCsHZEc36Ok8oOS10Q5u8YbgRNtlLVDwFLfT1RK04GU7vnZRtTB7f3L-4NYzSy3a_LbUU961auV5TqAYx6Okp3cUUQCUQXPp3QOdhRqxi3Eujfzqi6nmDKduJ7wNp4/s800/advanced_segments.png" alt="Defining Advanced Segment for Mobile" title="Defining Advanced Segment for Mobile" />
<p>To choose which segments are used for displaying the data, press <em>Advanced Segments</em> again, select and press <em>Apply</em>. <em>All Visitors</em> brings you back to an unfiltered view.</p>
<img height="308" width="530" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiwswMpwvre0lceb90dlt24ChyphenhyphenP3YtoAJ2uNrPFPuBAKJJrJ1s63luUfMZ6l0IyCCe4LDeAVy2VorgxlNeBDkLuNB2xdzBzkO01r3mCm8dD_89B8wgcy-Dk0rx4n-PjBk5oXmQ8OR70IA8/s800/advanced_segments_disable.png" alt="Choosing active segments" title="Choosing active segments" />
<p>And finally, a screenshot of the <b>Mobile</b> segment in action:</p>
<img height="338" width="530" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhZcNuM_8r6fP3S3-MXsE2ZjsyAkUbrdnbJfu1k4PGOAYnw8luoGcNf_SCz5xj2TgBBgZIiY1MtXNMFUWkc5PiRPYn3ZyoC77_sEl9XMhvXbkZdMEAieiKK_XEBO_X1Qc5xMDL_oD1D6gw/s800/advanced_segments_mobile_vs_total.png" alt="Mobile visitors vs. total traffic" title="Mobile visitors vs. total traffic" />skolimahttp://www.blogger.com/profile/13638993878949515686noreply@blogger.com0tag:blogger.com,1999:blog-4782981620644831943.post-12992145416321050082012-04-24T05:46:00.000+01:002012-05-01T13:55:11.527+01:00I want to live forever!<p>There is a concept of <a href="http://en.wikipedia.org/wiki/Gravitational_singularity">singularity</a> in general relativity theory, describing a place where gravitational forces become infinite, and the rules of the universe no longer apply. This area is limited by the events horizon, from which no knowledge of the internal state of the singularity can escape. By analogy, <a href="http://en.wikipedia.org/wiki/Vernor_Vinge">Vernor Vinge</a> in 1982 coined the term <i>technical singularity</i> to describe the moment in the history of technology when the rate of acceleration of future development becomes infinite from the point of view of a bystander. This is based on observation that all knowledge growth is self-propelling, and - as <a href="http://en.wikipedia.org/wiki/The_Singularity_Is_Near">Ray Kurzweil argues</a> - Moore's observation of exponential growth of computation capabilities extends both into the far past and the oncoming future.</p>
<p>Not surprisingly, such a topic is a potent source of inspiration for science fiction writers, bringing forth numerous stories. <a href="http://en.wikipedia.org/wiki/List_of_Watchmen_characters#Doctor_Manhattan">Doctor Manhattan</a> from Watchmen, Amber from <a href="http://en.wikipedia.org/wiki/Accelerando_%28novel%29">Accelerando</a> and Adam Zamoyski from <a href="http://en.wikipedia.org/wiki/Perfect_Imperfection">Perfect Imperfection</a> are just a few of my favourite characters, taking positions on the curve of progress that are well beyond human capabilities. However, the singularity now seems close enough that it no longer resides in the realm of pure fiction - well-established futurologists place their bets as well, trying to proclaim the date of the breakthrough. Reading through the list of such predictions amassed by Ray Kurzweil, a curious pattern emerges: each of the prophets places the date within his own lifespan, hoping to experience the event himself.</p>
<p>Those bets may not be that far off: just from last year, I recall two large pharmaceutical companies starting clinical trials with yet another batch of medications promising to delay the aging process and to relegate it beyond the hundred years milestone. First journalist comments on the story also mentioned - with outrage - how this would necessitate another extension of the retirement age. Which is a bit ironic, considering the fact that initially the <i>Old Age Pension</i> introduced by <a href="http://en.wikipedia.org/wiki/Otto_von_Bismarck#Old_Age_and_Disability_Insurance_Bill_of_1889">Otto von Bismarck</a> covered workers reaching 70 years of life, which was only a small percentage of the overall workforce at that time. Before you comment with dismay, consider that passing - or even approaching - the technical singularity means a true end to the <a href="http://en.wikipedia.org/wiki/Post-scarcity_economy">scarcity economy</a>. It's a world close to the one shown in <a href="http://en.wikipedia.org/wiki/Limes_inferior">Limes inferior</a>, <a href="http://dukaj.pl/bibliografia/utwory/Crux">Crux</a> or the books of <a href="http://en.wikipedia.org/wiki/Down_and_Out_in_the_Magic_Kingdom">Cory Doctorow</a>: a real welfare state, where every citizen can be provided with almost anything he needs.</p>
<p>Interestingly, Terry Pratchett hid a gem of an idea of how such a society is born in his book <a href="http://en.wikipedia.org/wiki/Strata_%28novel%29">Strata</a>: once a dependable life-prolonging technique is available, anyone earning enough per year to extend his life by at least another year becomes effectively immortal. The most amazing - and brutal - events happen at the brink of this revolution, for that truly is the event horizon: beyond the extension threshold, people are on their way to become gods and live forever. Being left behind is one of the scariest things that I can imagine. And unlike the gravitational singularity, this one has a border that permits communication. One-way, mostly, as it's not possible for an ant to understand the giant, but that makes the division even more glaring.</p>
<p>Those that are able to partake in the transition will be, in a way, the last human generation. Oh, surely we will not stop procreating, but the relation of power between the children and the parents will change dramatically: no longer are they raising an heir, an aid for their old days. As with the vampires of old tales, a child becomes a very expensive burden that only the wealthiest can afford, and a competitor for limited resources. I did mention before that this will be a post-scarcity economy, but still some goods remain in limited supply. A <a href="http://en.wikipedia.org/wiki/Mona_lisa">Mona Lisa</a>, for example.</p>
<p>And if you are lucky enough to be a member of the chosen caste, why wouldn't you desire something as unique? After all, your wealth will be unimaginable, with time unlimited for gathering the spoils, and only so few from your generation to share this gift of time. That's the real meaning of the <i>last generation</i> - for others will too, in future, arise to this plateau of eternal life. But being late to the party, most of them will never have the chance to amass such wealth and power.</p>
<p>I don't claim to know when the breakthrough will come. However, when it does - wouldn't it be terrible to miss it just by a few years? We already know some ways to extend one's life. If I can get ten, even five years more, my chances of participating in the singularity grow.<br/>And so, I run.</p>skolimahttp://www.blogger.com/profile/13638993878949515686noreply@blogger.com4tag:blogger.com,1999:blog-4782981620644831943.post-24732349520214773632012-03-06T21:07:00.000+00:002012-05-01T13:56:31.874+01:00Converting NAnt build files to MSBuild projects<P><B>TL;DR: I have a NAnt-to-MSBuild converter available at <A HREF="https://github.com/skolima/generate-msbuild">https://github.com/skolima/generate-msbuild</A>.</B></P>
<P>Initially, I envisioned implementing as faithful a translation of the build script as possible. However, after examining the idioms of both NAnt and MSBuild scripts I decided that a conversion producing results in accordance with those established patterns is a better choice. Investigating the build process of available projects revealed that converting the invocation of the <CODE>csc</CODE> task is enough to produce a functional Visual Studio solution. Translating tasks such as <CODE>mkdir</CODE>, <CODE>copy</CODE>, <CODE>move</CODE> or <CODE>delete</CODE>, while trivial to perform, would actually be detrimental to the final result. Those tasks are mostly employed in NAnt to prepare the build environment and to implement the “clean” target – the exact same effect is achieved in MSBuild by simply importing the <CODE>Microsoft.CSharp.targets</CODE> file. In a <CODE>.csproj</CODE> project conforming to the conventional file structure, such as is generated by the conversion tool, targets such as “PrepareForBuild” or “Clean” are automatically provided by the toolkit.</P><P>I planned to use the build listener infrastructure to capture the build process as it happens. The listener API of NAnt is not comprehensively documented, but exploring the source code of the project provides examples of its usage. Registering an <CODE>IBuildListener</CODE> reveals some clumsiness that suggests this mechanism has not seen much usage:</P><PRE><CODE>protected override void ExecuteTask()
{
Project.BuildStarted += BuildStarted;
Project.BuildFinished += BuildFinished;
Project.TargetStarted += TargetStarted;
Project.TargetFinished += TargetFinished;
Project.TaskStarted += TaskStarted;
Project.TaskFinished += TaskFinished;
Project.MessageLogged += MessageLogged;
// this ensures we are propagated to child projects
Project.BuildListeners.Add(this);
}</CODE></PRE><P>Last line of this code sample is crucial, as it is a common practice to split the script into multiple files, with a master file performing initial setup and separate per-directory build files, one for each output assembly. This allows shared tasks and properties to be defined once in the master file and inherited by the child scripts. Surprisingly, build listeners registered for events are not passed to the included scripts by default.</P><P>
Practically every operation in the NAnt build process is broadcast to the project’s listeners, with <CODE>*Started</CODE> events providing an opportunity to modify the subject before it is executed and <CODE>*Finished</CODE> events exposing the final property state, along with information on step execution status (success or failure). Upon receiving each message the logger is able to access and modify the current state of the whole project.</P>
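<P>As a rough sketch of how this can be used (a simplified, hypothetical handler rather than the actual generate-msbuild code - type and property names such as <CODE>CscTask</CODE> and <CODE>OutputFile</CODE> are my assumptions from memory, so verify them against the NAnt sources), a <CODE>TaskFinished</CODE> handler reacting to compiler invocations might look like this:</P><PRE><CODE>private void TaskFinished(object sender, BuildEventArgs e)
{
    // Only successful C# compiler invocations are interesting to the converter.
    if (e.Exception != null || !(e.Task is NAnt.DotNet.Tasks.CscTask))
        return;

    var csc = (NAnt.DotNet.Tasks.CscTask) e.Task;
    // Here the converter would record the output assembly, sources,
    // references and resources, to be written out later as a .csproj file.
    Log(Level.Info, "Captured csc invocation for {0}", csc.OutputFile);
}</CODE></PRE>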
<H4>Typical MSBuild use case scenarios</H4><P>I have inspected several available open source projects to establish common MSBuild usage scenarios. I determined that although the build script format allows for deep customization, most users do not take advantage of this, instead relying on Visual Studio to generate the file automatically. One notable exception from this usage pattern is <A HREF="http://nuget.org/">NuGet</A>, which employs MSBuild full capabilities for a custom deployment scenario. However, in order to comply with the limitations that the Visual Studio UI imposes on the script authors, the non-standard code is moved to a separate file and invoked through the <CODE>BeforeBuild</CODE> and <CODE>AfterBuild</CODE> targets.</P><P>Thus, in practice, users employ the convenience of <CODE>.targets</CODE> files “convention over configuration” approach (as mentioned in the <A HREF="http://skolima.blogspot.com/2012/02/build-systems-for-net-framework.html">previous post</A>) and restrict the changes to those that can be performed through the graphical user interface: setting compiler configuration property values; choosing references, source files and resources to be compiled; or extending pre- and post-build targets. When performing incremental conversion, those settings are preserved, so the user does not need to edit the build script manually.</P><P>The only exception to this approach is handling of the list of source files included in the build: it is always replaced with the files used in the recorded NAnt build. I opted for this behavior because it is coherent with what developers do in order to conditionally exclude and include code in the build – instead of decorating <CODE>Item</CODE> nodes with <CODE>Condition</CODE> attributes, they wrap code inside the source files with
<CODE>#if SYMBOL_DEFINED</CODE>/<CODE>#else</CODE>/<CODE>#endif</CODE> preprocessor directives. This technique is employed, for example, in the NAnt build system itself and has been verified to work correctly after conversion. It has the additional benefit of being easily malleable within the Visual Studio – conditional attributes, on the other hand, are not exposed in the UI.</P>
<H4>NAnt converter task</H4><P>Because I meant the conversion tool to be as easy to use for the developer as possible, I have implemented it as a NAnt task. It might be even more convenient if the conversion were available as a command-line switch to NAnt, but this would require the user to compile a custom version of NAnt instead of using it as a simple, stand-alone drop-in. To use the current version, you just have to add <CODE><generate-msbuild/></CODE> as the first item in the build file and execute a clean build.</P><P>As I showed in my <A HREF="http://skolima.blogspot.com/2012/02/build-systems-for-net-framework.html">previous post</A>, the Microsoft Build project structure is sufficiently similar to NAnt’s syntax that almost verbatim element-to-element translation is possible. However, as the two projects mature and introduce more advanced features (such as functions, in-line scripts and custom tasks), the conversion process becomes more complex. Instead of shallow translation of unevaluated build variables, the converter I designed captures the flow of the build process and maps all known NAnt tasks to appropriate MSBuild items and properties. The task registers itself as a build listener and handles <CODE>TaskFinished</CODE> and <CODE>BuildFinished</CODE> events.</P><P>Upon each successful execution of a <CODE>csc</CODE> task, its properties and sub-items are saved as appropriate MSBuild constructs. When the main project file execution finishes (because a NAnt script may include sub-project files, as is the case with the script NAnt uses to build itself), a solution file is generated which references all the created Microsoft Build project files.</P>
<P>As I mentioned earlier, I initially anticipated that translators would be necessary for numerous existing NAnt tasks. However, after performing successful conversion of NAnt and <A HREF="http://www.cruisecontrolnet.org/projects/ccnet/wiki">CruiseControl.NET</A>, I found out that only a <CODE>csc</CODE> to <CODE>.csproj</CODE> translation is necessary. The converter captures the output file name of the <CODE>csc</CODE> invocation and saves a project file with the same name, replacing the extension (<CODE>.dll</CODE>/<CODE>.exe</CODE>) with <CODE>.csproj</CODE>. If the file already exists then its properties are updated, to the extent possible. In the resulting MSBuild file all variables are expanded and all default values are explicitly declared.</P>
<P>All properties that are in use by the build scripts on which the converter was tested have been verified to be translated properly. Several known items (assembly and project references, source files and embedded resources) are always replaced, but other items are preserved. Properties are set without any <CODE>Condition</CODE> attribute, thus if the user sets them from the Visual Studio UI, then those more specific values will override the ones copied from the NAnt script.</P><P>I initially developed and tested the MSBuild script generator on the Microsoft .NET Framework, but I always planned for it to be usable on Mono as well. I quickly found out that Mono had no implementation of the <CODE>Microsoft.Build</CODE> assembly. This is a relatively new assembly, introduced in Microsoft .NET Framework version 4.0. As this new API simplified development of the converter greatly, I decided that instead of re-writing the tool using classes already existing in Mono, I would implement the missing classes myself.</P>
<H4>Mono Project improvements</H4><P>I created a complete implementation of the <A HREF="https://github.com/mono/mono/tree/master/mcs/class/Microsoft.Build/Microsoft.Build.Construction"><CODE>Microsoft.Build.Construction</CODE></A> namespace, along with the necessary classes and methods from the <CODE>Microsoft.Build.Evaluation</CODE> and <CODE>Microsoft.Build.Exceptions</CODE> namespaces. The Construction namespace deals with parsing the raw build file XML data, creating new nodes and saving them to a file. It contains a single class for every valid project file construct, along with several abstract base classes, which encapsulate functionality common to their descendants, e.g. <CODE>ProjectElement</CODE> is able to load and save a simple node, storing information in XML attributes, while <CODE>ProjectElementContainer</CODE> extends it and can also store child sub-nodes.</P><P>While the behavior of the Microsoft implementation of those classes strongly suggests that they store the raw XML in memory (they are able to save the loaded file without any formatting modifications), the documentation does not require this behavior. As this would bring no additional advantages, and is detrimental to memory usage, my implementation only stores the parsed representation of the build script. Two exceptions from this are the <CODE>ProjectExtensionsElement</CODE> and <CODE>ProjectCommentElement</CODE>, as they represent nodes that have no syntactic meaning from the MSBuild point of view and it is not possible to parse them in any way – thus the raw XML is kept and saved as-is.</P><P>A project file is parsed using an event-driven parsing model, also known as SAX. This is preferable for performance reasons – the parser does not backtrack, and there is no need to ever store the whole file in memory. As subsequent nodes are encountered, the parent node checks whether its content constitutes a valid child, and creates an appropriate object.</P>
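<P>To give a feel for the API surface in question, here is a small usage sketch of the public <CODE>Microsoft.Build.Construction</CODE> classes (ordinary consumer code, not a fragment of the Mono implementation itself):</P><PRE><CODE>using Microsoft.Build.Construction;

static class CsprojWriter
{
    static void Main()
    {
        // Build a minimal C# project file in memory and save it to disk.
        ProjectRootElement project = ProjectRootElement.Create();
        project.AddProperty("OutputType", "Exe");
        project.AddProperty("AssemblyName", "Hello");
        project.AddItem("Compile", "*.cs");
        project.AddImport(@"$(MSBuildBinPath)\Microsoft.CSharp.targets");
        project.Save("Hello.csproj");
    }
}</CODE></PRE>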
<P>As is suggested for Mono contributions, the code was created using a test-driven development approach, with NUnit test cases written first, followed by class stubs to allow the code to compile, and finally the actual API was implemented. As the tests’ correctness was first verified by executing them on Microsoft .NET implementation, this method ensures that the code conforms to the expected behavior even in places where the MSDN documentation is vague or incomplete.</P>
<H4>Evaluation in practice</H4><P>After completing the implementation work, I tested the tool using two large open source projects that employ NAnt in their build process: <A HREF="http://boo.codehaus.org/">Boo</A> and <A HREF="http://www.ikvm.net/">IKVM.NET</A>.</P><P>Boo project consists mostly of code written in Boo itself and ships with a custom compiler, NAnt task and <CODE>Boo.Microsoft.Build.targets</CODE> file for MSBuild, so a full conversion would require referencing those additional assemblies and would not provide much value. However, the compiler itself and bootstrapping libraries are written in C#, thus providing a suitable test subject.</P><P>Executing the conversion tool required forcing the build using the 4.0 .NET Framework (instead of 3.5) and disabling the Boo script that the project uses internally to populate MSBuild files. Initial conversion attempt revealed a bug in my implementation, as Boo employs a different layout of NAnt project files than the previously tested projects. Once I fixed the converter to take this into account and generate paths rooted against the <CODE>.csproj</CODE> file location instead of the NAnt <CODE>.build</CODE> file, the tool executed successfully and produced a fully working Visual Studio 2010 project that can be used for building the C# parts of the Boo project.</P><P>Testing using IKVM.NET followed a similar path, as most of the project consists of Java code, which can not be compiled using MSBuild and does not lend itself to conversion. After I successfully managed to perform the daunting task of getting IKVM.NET to compile, the <CODE><generate-msbuild/></CODE> task was executed and produced a correct Visual Studio solution, with no further fixes or manual tweaks necessary. The update functionality also worked as expected, setting build properties copied from NAnt where they were missing from the MSBuild projects.</P>skolimahttp://www.blogger.com/profile/13638993878949515686noreply@blogger.com11tag:blogger.com,1999:blog-4782981620644831943.post-80836408174684144362012-02-06T20:12:00.000+00:002012-05-01T13:56:17.623+01:00Build systems for the .NET Framework<P>When on 13th of February 2002 Microsoft released the first stable version of the .NET Framework,
the ecosystem lacked an officially supported build platform. However, since early betas were
available short after July 2000 Professional Developers Conference, a native solution – <A HREF="http://nant.sourceforge.net/">NAnt</A> – emerged in August 2001,
three months before the framework itself became officially available. But it was not until
7th of November 2005 that Microsoft presented its own tool: <A HREF="http://msdn.microsoft.com/en-us/library/dd393574.aspx">MSBuild</A>. For two years the competing
systems coexisted in the .NET world, as MSBuild was a new, and relatively unpolished, product. When on 19th
of November 2007 a second version of MSBuild (labeled 3.5, to match the .NET Framework version
it accompanied) was released, it brought multiple improvements that developers have asked for. The community’s focus switched
from NAnt to the Microsoft solution, and NAnt 0.86-beta1, released on 8th of December 2007, was the last release
for almost three years. Although NAnt development started again in April 2010, this long stagnation
has led many of its previous users to believe the Open Source solution to be abandoned.</P><P>MSBuild 4.0 offers multiple improvements over NAnt: it ships with packaged Target files for commonly used project types,
in accordance with the “convention over configuration” paradigm; it has an ever-growing collection of community Tasks
which perform various commonly executed build operations; it supports parallel builds; it integrates with
Team Build (a <I>Continuous Integration</I> component of Microsoft Team Foundation Server) and other CI systems;
and most importantly, it is used internally by Visual Studio, which presents most build options through a graphical user interface
– developers creating a build project with the help of an IDE may not even be aware that MSBuild is being used underneath.</P><P>Nowadays MSBuild is the <I>de facto</I> standard tool for build automation in the .NET ecosystem. However, multiple
projects still employ a legacy NAnt build system – the main problems preventing migration being the complexity of the
existing build infrastructure and supporting <A HREF="http://www.mono-project.com/Main_Page">Mono</A>, which, until 2.4 (released on 8th of December 2009),
lacked an MSBuild implementation. Although the Mono version of MSBuild 3.5 is now relatively complete,
version 4.0 is still virtually non-existent.</P><!--TOC section Existing Solutions-->
<H4>Pre-existing Solutions</H4><!--SEC END --><P>Apart from the two already mentioned build platforms, there are several others. The first of them, dating way back into the Unix times, is called Autotools, officially known as the <A HREF="http://www.gnu.org/s/hello/manual/automake/GNU-Build-System.html">GNU Build System</A>. The core of Autotools – make – was released in 1997. Although this system is widely used by projects developed in <I>C</I> or <I>C++</I>, such as the Mono runtime engine, it has no built-in support for .NET-specific compilers, requiring a large amount of custom per-project work by developers. It also has a reputation of being convoluted and unfriendly, although extremely powerful.</P><P>Developers and users of other build systems, such as <A HREF="http://www.cmake.org/">CMake</A>, Ant or Maven, have on numerous occasions undertaken efforts to enhance .NET support. The Maven community especially has spawned numerous .NET-targeted clones – <A HREF="http://incubator.apache.org/npanday/">NPanday</A>, <A HREF="http://byldan.codeplex.com/">Byldan</A>, <A HREF="http://incubator.apache.org/nmaven/">NMaven</A> – none of which has gained any traction. The only exception seems to be maven-dotnet-plugin, which delegates the build process back to MSBuild.</P><P>An interesting new tool that is worth mentioning is <A HREF="https://github.com/forki/FAKE">FAKE – F# Make</A>. Although still very much an experimental project, started on 30th of March 2009, this tool is under active development by several contributors. It borrows heavily from ideas explored by <A HREF="http://rake.rubyforge.org/">Rake</A> (written in and for the needs of projects in the ruby language), and allows users to describe the build process configuration in the same language they are using to write their code.</P><!--TOC section Problem Statement and Goals-->
<P>This post looks in depth at three existing build platforms employed on the .NET Framework: NAnt – which used to be the <I>de facto</I> standard, Microsoft Build – the officially supported tool, and FAKE – an interesting build tool employing an entirely different build description paradigm.</P><P>All three tools present the same basic functionality of a build platform: a project file contains tasks, enclosed in targets, which may have specified dependencies upon other targets. During the build process, those targets are first sorted topologically and then tasks within each target are executed in sequence. However, the structure of a project file differs greatly between tools.</P><!--TOC section NAnt-->
<H4>NAnt</H4><!--SEC END --><P><A ID="ch:nant"></A></P><P>When Stefan Bodewig announced first official Ant release on 19th July 2000, the project already had undergone over a year of public development as part of the Tomcat servlet container, and had been used for a year before that as an internal tool at Sun Microsystems (under the name <EM>Another Neat Tool</EM>). In August 2001, Gerry Shaw made a decision to base the new .NET build platform on the existing Ant file syntax (initial code for .NET Beta 1 Ant clone was written by David Buksbaum of Hazware and released under the name of XBuild). Keeping with the open source tradition of self-recursive names, he aptly named this new tool NAnt, from <EM>NAnt is not Ant</EM>.</P><P>After almost ten years of separate development, NAnt’s <CODE>Project.build</CODE> is still difficult to distinguish from Ant’s <CODE>build.xml</CODE> file; the only obvious giveaway being the use of C#’s <CODE>csc</CODE> compiler task instead of Java’s <CODE>javac</CODE>. An absolutely minimal working NAnt build file looks as follows:</P><PRE><CODE><project default="build">
<target name="build">
<csc target="exe" output="Hello.exe">
<sources>
<include name="*.cs" />
</sources>
</csc>
</target>
</project></CODE></PRE><P>This short example contains a single target (<CODE>build</CODE>), which in turn contains a single task, with a simple nested fileset. Executing this file starts the default target, which invokes the <I>csc</I> task to compile the code using the appropriate C# compiler.</P><P>NAnt projects consist of several basic entities: task, types, properties, functions and loggers. Tasks wrap fundamental operations, such as copying a file, performing source control operations or invoking the compiler. Types represent strongly typed parameters, are aware of their content and validate their correctness on creation. A fileset is perhaps the most often used type – it is a lazily evaluated collection of files (the <I>sources</I> element in the example above is a fileset). Properties can be used for storing text values that are used multiple times. They are evaluated in the place of their declaration. Functions, along with operators, can be used in any attribute value, and are evaluated when the attribute is read (usually upon task execution). Loggers are usually employed for reporting build progress to the user through various front-ends, but can also serve for tracking project execution for other purposes. NAnt ships with a large collection of predefined elements, additional ones can be either loaded from external assemblies or defined in-line using a <I>script</I> task. Scripts can be written in any .NET language that has a <CODE>System.CodeDom.Compiler.CodeDomProvider</CODE> available.</P><P>A more advanced example, showing properties, functions and global tasks (not enclosed inside a target):</P><PRE><CODE><project>
<property name="is-mono"
value="${string::contains(framework::get-target-framework(), 'mono')}" />
<property name="runtime-engine"
value="${framework::get-runtime-engine(framework::get-target-framework()) }" />
<echo message="Checking Mono version" if="${is-mono}"/>
<exec program="${runtime-engine}" commandline="-V" if="${is-mono}" />
<echo message="Using non-Mono runtime engine: '${runtime-engine}'"
unless="${is-mono}" />
</project></CODE></PRE><P>Global tasks are always executed in the order they are declared and are used for setting up the project. Functions and properties are evaluated inside <CODE>${}</CODE> blocks, they can be distinguished by the fact that functions use <CODE>::</CODE> to separate the prefix from the function name. Also visible in this example are the <I>if</I> and <I>unless</I> attributes which are available on every task and are used for conditional task execution.</P><P>While NAnt inherited Ant’s mature syntax, along with such brilliant constructs as a distinction between <CODE>*</CODE> (match in current directory) and <CODE>**</CODE> (recursive directory match) for file inclusion/exclusion, it also inherited Ant’s deficiencies. The most glaring one is the inherent single threaded nature of the build process – although the engine itself can be relatively easily extended to invoke targets in <A HREF="https://github.com/skolima/NAnt-new/tree/parallel">parallel</A>, existing build files rely on targets being executed sequentially.</P><!--TOC section Microsoft Build-->
<H4>Microsoft Build</H4><!--SEC END --><P><A ID="ch:microsoft-build"></A></P><P>MSBuild 2.0 (releases are numbered after the Microsoft .NET Framework they accompany, thus the first release is labeled 2.0, second – 3.5 and third – 4.0) was released on the 7th of November, 2005, as part of the Microsoft .NET 2.0 release. It came bundled as the default build tool for Visual Studio 2005. MSBuild’s initial design was similar to NAnt’s, but because at that time company policy forbade Microsoft employees from looking at the implementation of open source solutions (in order to prevent intellectual property violation claims), it does differ in many subtle ways.</P><P>Visual Studio 2005 used Microsoft Build for compiling C# and Visual Basic projects, all other solution types were still handled by the built-in mechanisms inherited from the 2003 release. Before version 2.0 MSBuild was not used internally by Microsoft, but as soon as it had reached the <EM>Release To Manufacturing</EM> stage, intense build process conversion effort has been launched, and by the early November 2005 it was already building about 40% of the Visual Studio project itself. This internal version added support for parallel builds (released to the general audience on 19th November 2007 as 3.5) and compiled all types of projects available in Visual Studio, including <I>Visual C++</I> (this last feature was released on 12th April 2010 as part of 4.0 version). Another important improvement released with Visual Studio 2010 was a graphical debugging tool.</P><P>A minimalistic MSBuild’s <CODE>Project.proj</CODE> looks as follows:
</P><PRE><CODE><Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003"
DefaultTargets="Build">
<Target Name="Build">
<ItemGroup>
<Compile Include="*.cs" />
</ItemGroup>
<CSC Sources="@(Compile)" OutputAssembly="Hello.exe"/>
</Target>
</Project></CODE></PRE><P>Although a different naming convention is used (uppercase identifiers instead of lowercase), this file shows great similarity to NAnt’s <I>Project.build</I>. It contains a single target, which in turn contains an item group and a task. The namespace definition is required and uses the same schema regardless of the MSBuild version. Executing this file starts the default target, named <I>Build</I>, which calls the <I>CSC</I> task to compile the code. File collections (and item groups in general) can be declared at target or project level, but (unlike NAnt) cannot be nested inside tasks (some tasks allow for embedding item groups and property groups, but this is rare behavior). Prior to version 4.0, items could not be modified once declared.</P><P>Despite being a valid MSBuild file, the above example would not be recognized by Visual Studio (and by most .NET developers). Instead of requiring the user to describe the whole build process verbosely, MSBuild offers <CODE>.target</CODE> files which allow “convention over configuration” approach to build process : user only specifies those settings and actions that differ from the default ones. MSBuild projects use <CODE>.proj</CODE> extension for generic build scripts, and language-specific extensions are used for files importing specific <CODE>.target</CODE>s (for example <CODE>.csproj</CODE> for C# projects). Thus, a minimal <CODE>Project.csproj</CODE> might be written as:</P><PRE><CODE><Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <ItemGroup>
    <Compile Include="*.cs" />
  </ItemGroup>
  <Import Project="$(MSBuildBinPath)\Microsoft.CSharp.targets" />
</Project></CODE></PRE><P>By replacing an explicit invocation of the <I>CSC</I> task with an <I>Import</I> directive, this file inherits the whole build pipeline defined for Visual Studio, including automatic dependency tracking (should one declare <I>Reference</I> items), a graphical user interface for configuring the build, targets for cleaning and rebuilding the assembly, and standardized extension points.</P><P>The basic entities in an MSBuild project are properties, items and tasks. Properties represent simple values. Items are untyped key-value collections, mostly used to represent files. Both types are evaluated as soon as they are encountered. They must be wrapped in groups, but grouping only allows them to share a <I>Condition</I>: properties are not bundled into collections, and items are always grouped by name (in the example above the <I>ItemGroup</I> generates items named <I>Compile</I>, one for each matching file). MSBuild has a mechanism named <I>batching</I> that splits items sharing a name according to a specified metadata value – when this is used, a task defined once will be executed separately for each batch of items (a short batching sketch is shown after the functions example below). Item definitions allow setting default item metadata values. MSBuild, like NAnt, distinguishes between <CODE>*</CODE> (match inside the current folder) and <CODE>**</CODE> (recursive directory match). Loggers can be used for tracking project execution, but they must be attached from the command line. There is quite an extensive task collection available out of the box; many of the tasks are direct replacements for NAnt tasks. Since 4.0 it is also possible to define a task in-line with the help of <A HREF="http://msdn.microsoft.com/en-us/library/t41tzex2.aspx"><I>UsingTask</I></A>.</P><P>An example of using functions for evaluating task conditions (this example does not work as of Mono 2.10, because property functions are still not implemented there):</P><PRE><CODE><Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003"
         InitialTargets="Info">
  <PropertyGroup>
    <IsMono>$(MSBuildBinPath.Contains('mono'))</IsMono>
    <RuntimeEngine>$(MSBuildBinPath)/../../../bin/mono</RuntimeEngine>
  </PropertyGroup>
  <Target Name="Info">
    <Message Text="Checking Mono version" Condition="$(IsMono)"/>
    <Exec Command="$(RuntimeEngine) -V" Condition="$(IsMono)"/>
    <Message Text="Using non-Mono runtime engine: '$(MSBuildBinPath)'"
             Condition="!$(IsMono)"/>
  </Target>
</Project></CODE></PRE><P>Property values and functions are evaluated inside <CODE>$()</CODE> blocks; basic operators (such as <CODE>==</CODE>) are also recognized outside of those markers. The <CODE>@()</CODE> syntax is used for referencing collections of items, and <CODE>%()</CODE> triggers the batching mode using item metadata. MSBuild 4.0 keeps track of the actual underlying type of each property value and can invoke any .NET instance method defined on such an object – however, because of security concerns, only methods marked as safe (number/date/string/version manipulation and read-only file system access) are available in scripts (this security mechanism can be disabled by setting the environment variable <CODE>MSBUILDENABLEALLPROPERTYFUNCTIONS</CODE> to 1). The syntax for method invocation comes from PowerShell – instance methods are called with a simple <CODE>Value.Method()</CODE>, while static methods can be invoked with <CODE>[Full.Type.Name]::Method()</CODE>.</P><P>The MSBuild syntax draws heavily from NAnt and should feel quite familiar to any developer once one grasps how items differ from NAnt’s strongly typed collections. The tool is under active development, has extensive support from both Microsoft and the community, and – since Mono 2.4 was released on the 8th of December 2009 – is usable as a cross-platform build system.</P><!--TOC section FAKE-->
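<P>To make the batching mechanism mentioned above more concrete, here is a minimal, purely illustrative sketch (the <I>Resource</I> item name and the <I>Culture</I> metadata are invented for this example). Because the <I>Message</I> task references <CODE>%(Resource.Culture)</CODE>, it is executed once per distinct <I>Culture</I> value, and within each execution <CODE>@(Resource)</CODE> expands to only the items of that batch:</P><PRE><CODE><Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003"
         DefaultTargets="Report">
  <ItemGroup>
    <!-- the Culture metadata is what batching will group on -->
    <Resource Include="strings.en.resx"><Culture>en</Culture></Resource>
    <Resource Include="errors.en.resx"><Culture>en</Culture></Resource>
    <Resource Include="strings.pl.resx"><Culture>pl</Culture></Resource>
  </ItemGroup>
  <Target Name="Report">
    <!-- %(Resource.Culture) triggers task batching: one execution per Culture value -->
    <Message Text="Culture %(Resource.Culture): @(Resource)" />
  </Target>
</Project></CODE></PRE><P>This should print two messages: one listing the two <I>en</I> files and one listing the single <I>pl</I> file.</P>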
<H4>FAKE</H4><!--SEC END --><P>Fake was published by Steffen Forkmann on the 1st of April, 2009. His goal was to create a build platform using the same language he wrote his programs in – F# (this trend is also observed in Ruby, Python and other languages which allow executable domain-specific languages to be defined at the language level). Three years later, Fake still remains more of an academic exercise than a widely deployed tool, but it does explore a very interesting approach to build management. Fake executes its scripts through the F# interpreter, extending the syntax of the language with three simple additions:
defining build steps (<CODE>Target? TargetName</CODE>),
declaring dependencies between targets (<CODE>For? TargetName <- Dependency? AnotherTargetName</CODE>) and
specifying the default target (<CODE>Run? TargetName</CODE>).</P><P>A basic <CODE>build.fsx</CODE> might look as shown in the listing below:
</P><PRE><CODE>#I @"tools\FAKE"
#r "FakeLib.dll"
<B>open</B> Fake
Target? Default <-
    <B>fun</B> _ ->
        <B>let</B> appReferences = !+ @"**.csproj" |> Scan
        <B>let</B> apps = MSBuildRelease @".\build\" "Build" appReferences
        Log "AppBuild-Output: " apps
Run? Default</CODE></PRE><P>The first three lines reference <CODE>FakeLib.dll</CODE> from the <CODE>tools\FAKE</CODE> directory and open the <CODE>Fake</CODE> namespace. Following them is a target definition, with a fileset wildcard match pipelined (using F#’s <CODE>|></CODE> operator) to the <CODE>Scan</CODE> function, then an <CODE>MSBuildRelease</CODE> task invocation, log output and, finally, the declaration of the default target. It should be noted here that Fake does not have built-in tasks for compiling code – it relies on the presence of MSBuild instead. There is also no need for a special in-line task definition syntax, as arbitrary F# code can be embedded anywhere in the script. This can be seen in the following example:</P><PRE><CODE>#I @"tools\FAKE"
#r "FakeLib.dll"
<B>open</B> Fake
<B>open</B> System
<B>let</B> isMono = Type.GetType ("Mono.Runtime") <> null
<B>let</B> stringType = Type.GetType ("System.String")
<B>let</B> corlibLocation = IO.Path.GetDirectoryName (stringType.Assembly.Location)
<B>let</B> notMono = String.Format("Using non-Mono runtime engine: {0}", corlibLocation)
Target? Info <-
    <B>fun</B> _ ->
        <B>if</B> isMono <B>then</B>
            trace "Running on Mono"
        <B>else</B>
            trace notMono
Run? Info</CODE></PRE><P>The keyword <CODE>let</CODE> declares an F# variable, which is the equivalent of the property declarations used by NAnt and MSBuild. However, unlike those two tools, Fake allows the developer to invoke any .NET method, without security constraints.</P><P>As an experimental project, Fake does have some shortcomings. It does not execute its targets in parallel, although the code inside them can be easily parallelized. It also does not keep track of whether a target’s outputs are up to date, so it executes the target’s commands during every project rebuild, which makes it unsuitable for large projects. There is no support for using Fake under operating systems other than Windows. And the F# language itself still remains exotic to most .NET developers, making the build scripts hard to understand and maintain.</P>skolimahttp://www.blogger.com/profile/13638993878949515686noreply@blogger.com3tag:blogger.com,1999:blog-4782981620644831943.post-58019151146179314462012-01-18T20:21:00.001+00:002012-01-19T16:50:57.392+00:00Upgrading from CruiseControl.NET 1.5 to 1.6 / 1.7<p>I've finally decided to update the version of <a href="http://www.cruisecontrolnet.org/projects/ccnet/wiki">CruiseControl.NET</a> I use, going from 1.5 straight to a 1.7 nightly build. My previous attempt ended with cryptic error messages, but, as this time the build server was already having some problems and required maintenance, I went through with the update (<b>after</b> fixing the problems first, of course).</p><p>The most important thing, if you don't already know it: there's a validator included in the downloadable package, which you can use to check how the server will interpret your pretty spaghetti of configuration files. If you are making use of the <a href="http://www.cruisecontrolnet.org/projects/ccnet/wiki/Configuration_Preprocessor">pre-processor</a> feature - the validator is indispensable. A neat trick while using it is copying the output (processed) configuration, changing the input files and copying the new output to a separate file, then running diff on those two to check whether the actual change you just introduced is what you were intending to do. In my case - I was checking whether I got exactly the <b>same</b> output while using a two-years-newer release by running my original configuration through the 1.5 validator and trying to get identical results from the 1.7 parser.<br />The initial result you'll get will most likely be this:</p><pre><code>Unused node detected: xmlns:cb="urn:ccnet.config.builder"</code></pre><p>Oh. Not good. <a href="http://stackoverflow.com/q/2843506/3205">StackOverflow</a> has an answer that claims to fix this problem, only for it to result in this:</p><pre><code>Unused node detected: xmlns="http://thoughtworks.org/ccnet/1/5"</code></pre><p>Well - not exactly a change for the better. What is the problem? The changes in the configuration parser made it a bit more picky about the files it accepts. Now they have to start with the XML preamble and include the version information (1.5 or 1.6, there's no 1.7 schema yet). The required beginning of the main configuration file is now as follows:</p><pre><code><?xml version="1.0" encoding="utf-8"?><br /><cruisecontrol xmlns:cb="urn:ccnet.config.builder"<br />xmlns="http://thoughtworks.org/ccnet/1/5"></code></pre><p>Also, while 1.5 allowed you to include files containing a "naked" node (e.g. 
to reuse svn version control configuration), 1.6 requires the top-level node in the included file to be either <code><cb:config-template></code> or <code><cb:scope></code>. Thus, to be on the safe side, start each of your configuration sub-files with the following:</p><pre><code><?xml version="1.0" encoding="utf-8"?><br /><cb:config-template xmlns:cb="urn:ccnet.config.builder"<br />xmlns="http://thoughtworks.org/ccnet/1/5"></code></pre><p>With those changes in place, my configuration file results in the same pre-processor output in both CruiseControl.NET 1.5 and 1.7.</p>skolimahttp://www.blogger.com/profile/13638993878949515686noreply@blogger.com0tag:blogger.com,1999:blog-4782981620644831943.post-8104319490975785232009-05-08T16:24:00.006+01:002012-01-19T16:50:57.443+00:00D'oh!<p>I just spent two hours blaming a <a href="http://ccnet.thoughtworks.com">CruiseControl.Net</a> release candidate for a bug, which turned out to be a trailing <code>\</code> in my configuration.</p><p>So remember, kids: <code>nant -D:publishroot="E:\PublicBuilds\" publishbuild</code> will invoke nant with the default target (the trailing backslash escapes the closing quote, so <code>publishbuild</code> becomes part of the property value instead of being treated as a target name). To make it work as expected, one has to use <code>nant -D:publishroot="E:\PublicBuilds\\" publishbuild</code>.</p>skolimahttp://www.blogger.com/profile/13638993878949515686noreply@blogger.com2