Some thoughts on Arch's way forward.

I was reading the thread on the messed-up /opt when I came across this article:
http://www.osnews.com/story.php?news_id=8761
I have read a few articles that criticise Arch (elsewhere it was Microsoft criticising Linux); this one pits Arch against Slackware.
One of the points made is that there is paid support for Slackware but not for Arch. I already install Arch for money (for the company I work for) and support it afterwards, so paid support for Arch does exist.
The question, though, is whether there should be "official paid support" for Arch. If so:
1.  Who decides who can do paid support?
2.  What could be the criteria for it?
I also noted the claim that Arch is less stable on the server than Slackware, though I have never had a problem with Arch on a server: I install only what is needed, and it runs fine.
Anyway, does anyone have any thoughts on paid support?

I see that it's now two people who think I am the spawn of Satan!  :twisted:
tomk wrote:
Official support would be a bad idea, IMO. As you say, Benedict, many businesses like to pay for support, because in doing so, they gain entitlement to the rights and protections afforded to paying customers. In negotiating their contracts or support agreements, they can request SLAs, service guarantees, and related penalties to ensure that these are enforced.
Well, yes. The company I work for already does this for Novell, Microsoft and Linux. On the Linux front it is a minority of SUSE boxes, quite a few IPCop ones and quite a lot of Arch. The businesses don't actually care which version of Linux it is; in fact, many don't care whether it is Linux at all. They just expect it to do what we say it does. We do that in exchange for cash.
A support provider who enters into such an agreement becomes bound by it, immediately limiting their freedom and flexibility. I would hate to see this happen to Arch.
Well, the support we provide for any of the platforms we cover does not bind the vendor who provides the platform, so I don't see this as a problem.
Mind you, Judd and co. should be making money out of it as well.
Dusty wrote:
Interesting idea, interesting discussion....
I think it would be great if some company offered additional Arch support, but I think it would need a lot more funding than the current Arch project has. So third-party support, with the supporting company making large donations to Arch Linux, would be the best-case scenario.
Thanks for saying it was an interesting idea. 
Well, we don't sell "Arch", but we do install systems that run on Arch (some of them, at any rate), and this will probably expand. Being a for-profit organisation, we do have the resources to do this, especially for companies that are geographically close to us.
phrakture wrote:
I am now providing unofficial Arch support - if you want me to post in your thread, you must send me $1.99 (PayPal, Visa, Mastercard, and cashier's check only) - then I will respond... this will be my last free post...
Well, the company I work for already charges for paid support. That does not stop me providing free support. Mind you, I am unlikely to fly over, stay for a week and fix something you broke, all for free.

Similar Messages

  • Some Thoughts On An OWB Performance/Testing Framework

    Hi all,
    I've been giving some thought recently to how we could build a performance tuning and testing framework around Oracle Warehouse Builder. Specifically, I'm looking at ways in which we can use some of the performance tuning techniques described in Cary Millsap/Jeff Holt's book "Optimizing Oracle Performance" to profile and performance-tune mappings and process flows, and to use some of the ideas put forward in Kent Graziano's Agile Methods in Data Warehousing paper http://www.rmoug.org/td2005pres/graziano.zip and Steven Feuerstein's utPLSQL project http://utplsql.sourceforge.net/ to provide an agile/test-driven way of developing mappings, process flows and modules. The aim of this is to ensure that the mappings we put together are as efficient as possible, work individually and together as expected, and are quick to develop and test.
    At the moment, most people's experience of performance tuning OWB mappings is firstly to see if the mapping runs set-based rather than row-based, then perhaps to extract the main SQL statement and run an explain plan on it, then to check that indexes etc. are being used correctly. This involves a lot of manual work, doesn't factor in the data available from the wait interface, doesn't store the execution plans anywhere, and doesn't really scale out to encompass entire batches of mappings (process flows).
    For some background reading on Cary Millsap/Jeff Holt's approach to profiling and performance tuning, take a look at http://www.rittman.net/archives/000961.html and http://www.rittman.net/work_stuff/extended_sql_trace_and_tkprof.htm. Basically, this approach traces the SQL that is generated by a batch file (read: mapping) and generates a file that can be later used to replay the SQL commands used, the explain plans that relate to the SQL, details on what wait events occurred during execution, and provides at the end a profile listing that tells you where the majority of your time went during the batch. It's currently the "preferred" way of tuning applications as it focuses all the tuning effort on precisely the issues that are slowing your mappings down, rather than database-wide issues that might not be relevant to your mapping.
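    To make the mechanics concrete, here is a minimal sketch (plain session-level SQL; the tracefile identifier is an invented example) of switching event 10046 extended tracing on and off around a batch:

        -- Tag the trace file so it is easy to find in user_dump_dest
        ALTER SESSION SET tracefile_identifier = 'owb_map_customers';
        -- Level 12 records bind variable values and wait events as well as the SQL
        ALTER SESSION SET events '10046 trace name context forever, level 12';
        -- ... execute the mapping / batch of SQL here ...
        ALTER SESSION SET events '10046 trace name context off';

    The resulting trace file can then be formatted with TKPROF, or fed to a profiler, to produce the response-time profile described above.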
    For some background information on agile methods, take a look at Kent Graziano's paper, this one on test-driven development http://c2.com/cgi/wiki?TestDrivenDevelopment , this one http://martinfowler.com/articles/evodb.html on agile database development, and the sourceforge project for utPLSQL http://utplsql.sourceforge.net/. What this is all about is having a development methodology that builds in quality but is flexible and responsive to changes in customer requirements. The benefit of using utPLSQL (or any unit testing framework) is that you can automatically check your altered mappings to see that they still return logically correct data, meaning that you can make changes to your data model and mappings while still being sure that everything compiles, runs and returns the expected results.
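    As a flavour of what this might look like against a mapping's target table, here is a minimal utPLSQL (version 1) sketch; the package name, table and expected row count are all invented for illustration:

        -- Hypothetical test package for a mapping that loads DIM_CUSTOMER.
        -- utPLSQL v1 convention: ut_setup/ut_teardown plus ut_-prefixed tests.
        CREATE OR REPLACE PACKAGE ut_load_dim_customer IS
          PROCEDURE ut_setup;
          PROCEDURE ut_teardown;
          PROCEDURE ut_row_count;
        END ut_load_dim_customer;
        /
        CREATE OR REPLACE PACKAGE BODY ut_load_dim_customer IS
          PROCEDURE ut_setup IS
          BEGIN
            NULL;  -- seed known test rows into the source tables here
          END ut_setup;
          PROCEDURE ut_teardown IS
          BEGIN
            NULL;  -- remove the test rows again
          END ut_teardown;
          PROCEDURE ut_row_count IS
            l_count PLS_INTEGER;
          BEGIN
            SELECT COUNT(*) INTO l_count FROM dim_customer;
            -- assert that the mapping loaded exactly the seeded rows
            utassert.eq('dim_customer row count', l_count, 10);
          END ut_row_count;
        END ut_load_dim_customer;
        /
        -- Run with: EXECUTE utplsql.test('load_dim_customer');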
    Observations On The Current State of OWB Performance Tuning & Testing
    At present, when you build OWB mappings, there is no way (within the OWB GUI) to determine how "efficient" the mapping is. Often, when building the mapping against development data, the mapping executes quickly and yet when run against the full dataset, problems then occur. The mapping is built "in isolation" from its effect on the database and there is no handy tool for determining how efficient the SQL is.
    OWB doesn't come with any methodology or testing framework, and so apart from checking that the mapping has run, and that the number of rows inserted/updated/deleted looks correct, there is nothing really to tell you whether there are any "logical" errors. Also, there is no OWB methodology for integration testing, unit testing, or any other sort of testing, and we need to put one in place. Note - OWB does come with auditing, error reporting and so on, but there's no framework for guiding the user through a regime of unit testing, integration testing, system testing and so on, which I would imagine more complete developer GUIs come with. Certainly there's no built-in ability to use testing frameworks such as utPLSQL, or a part of the application that lets you record whether a mapping has been tested and changes the test status of mappings when you make changes to ones they depend on.
    OWB is effectively a code generator, and this code runs against the Oracle database just like any other SQL or PL/SQL code. There is a whole world of information and techniques out there for tuning SQL and PL/SQL, and one particular methodology that we quite like is the Cary Millsap/Jeff Holt "Extended SQL Trace" approach, which uses Oracle diagnostic events to find out exactly what went on during the running of a batch of SQL commands. We've been pretty successful using this approach to tune customer applications and batch jobs, and we'd like to use it, together with the "Method R" performance profiling methodology detailed in the book "Optimizing Oracle Performance", as a way of tuning our generated mapping code.
    Whilst we want to build performance and quality into our code, we also don't want to overburden developers with an unwieldy development approach, because we know what will happen: after a short amount of time, it won't get used. Given that we want this framework to be used for all mappings, it's got to be easy to use, cause minimal overhead, and produce results that are easy to interpret. If at all possible, we'd like to use some of the ideas from agile methodologies such as eXtreme Programming, SCRUM and so on to build in quality but minimise paperwork.
    We also recognise that there are quite a few settings that can be changed at a session and instance level, that can have an effect on the performance of a mapping. Some of these include initialisation parameters that can change the amount of memory assigned to the instance and the amount of memory subsequently assigned to caches, sort areas and the like, preferences that can be set so that indexes are preferred over table scans, and other such "tweaks" to the Oracle instance we're working with. For reference, the version of Oracle we're going to use to both run our code and store our data is Oracle 10g 10.1.0.3 Enterprise Edition, running on Sun Solaris 64-bit.
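    Purely as illustration of the sort of session-level "tweak" described here (the values are placeholders, not recommendations):

        -- Let Oracle size sort/hash work areas automatically for this session
        ALTER SESSION SET workarea_size_policy = AUTO;
        -- Example of biasing the optimizer towards index access paths
        ALTER SESSION SET optimizer_index_cost_adj = 25;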
    Some initial thoughts on how this could be accomplished
    - Put in place some method for automatically / easily generating explain plans for OWB mappings (see the sketch after this list; issue - this is only relevant for mappings that are set-based, and what about pre- and post-mapping triggers?)
    - Put in place a method for starting and stopping an event 10046 extended SQL trace for a mapping
    - Put in place a way of detecting whether the explain plan / cost / timing for a mapping changes significantly
    - Put in place a way of tracing a collection of mappings, i.e. a process flow
    - The way of enabling tracing should either be built in by default, or easily added by the OWB developer. Ideally it should be simple to switch it on or off (perhaps levels of event 10046 tracing?)
    - Perhaps store trace results in a repository? reporting? exception reporting?
    - At an instance level, come up with some stock recommendations for instance settings
    - Identify the set of instance and session settings that are relevant for ETL jobs, and determine what effect changing them has on the ETL job
    - Put in place a regime that records key instance indicators (STATSPACK / ASH) and allows reports to be run / exceptions to be reported
    - Incorporate any existing "performance best practices" for OWB development
    - Define a lightweight regime for unit testing (as per agile methodologies), a way of automating it (utPLSQL?), and a way of recording the results so we can check the status of dependent mappings easily
    - Other ideas around testing?
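    On the explain plan point, one plausible starting shape (the statement, table and column names are all hypothetical) is simply:

        EXPLAIN PLAN SET STATEMENT_ID = 'MAP_CUSTOMERS' FOR
          INSERT INTO dim_customer (cust_id, cust_name)
          SELECT customer_id, customer_name FROM src_customers;

        -- Read the plan back, ready to be copied into a repository table
        SELECT * FROM TABLE(dbms_xplan.display('PLAN_TABLE', 'MAP_CUSTOMERS'));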
    Suggested Approach
    - For mapping tracing and generation of explain plans, a pre- and post-mapping trigger that turns extended SQL trace on and off, places the trace file in a predetermined spot, formats the trace file and dumps the output to repository tables (a minimal sketch of the on/off part follows this list).
    - For process flows, something that does the same at the start and end of the process. Issue - how might this conflict with mapping level tracing controls?
    - Within the mapping/process flow tracing repository, store the values of historic executions, and have an exception report that tells you when a mapping's execution time varies by a certain amount (see the example query after this list)
    - Get the standard set of preferred initialisation parameters for a DW and use these as the starting point for the stock recommendations. Identify which ones have an effect on an ETL job.
    - Identify the standard steps Oracle recommends for getting the best performance out of OWB (workstation RAM etc.) - see OWB Performance Tips http://www.rittman.net/archives/001031.html and Optimizing Oracle Warehouse Builder Performance http://www.oracle.com/technology/products/warehouse/pdf/OWBPerformanceWP.pdf
    - Investigate what additional tuning options and advisers are available with 10g
    - Investigate the effect of system statistics & come up with recommendations.
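    As a very rough sketch of the trace on/off part of this approach (the function names and the fixed trace level are assumptions; OWB pre- and post-mapping process operators would call something along these lines):

        CREATE OR REPLACE FUNCTION trace_on (p_map_name IN VARCHAR2)
          RETURN NUMBER IS
        BEGIN
          -- Tag the trace file with the mapping name, then enable 10046 tracing
          EXECUTE IMMEDIATE
            'ALTER SESSION SET tracefile_identifier = ''' || p_map_name || '''';
          EXECUTE IMMEDIATE
            'ALTER SESSION SET events ''10046 trace name context forever, level 8''';
          RETURN 0;
        END trace_on;
        /
        CREATE OR REPLACE FUNCTION trace_off RETURN NUMBER IS
        BEGIN
          EXECUTE IMMEDIATE
            'ALTER SESSION SET events ''10046 trace name context off''';
          RETURN 0;
        END trace_off;
        /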
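    And for the variance exception report, given an assumed repository table MAP_RUNS(map_name, run_date, elapsed_secs) populated from the trace/audit data, a first cut could be as simple as:

        -- Flag runs more than 50% slower than that mapping's historic average
        SELECT r.map_name, r.run_date, r.elapsed_secs
        FROM   map_runs r
        WHERE  r.elapsed_secs > 1.5 * (SELECT AVG(h.elapsed_secs)
                                       FROM   map_runs h
                                       WHERE  h.map_name = r.map_name);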
    Further reading / resources:
    - "Diagnosing Performance Problems Using Extended Trace", Cary Millsap
    http://otn.oracle.com/oramag/oracle/04-jan/o14tech_perf.html
    - "Performance Tuning With STATSPACK" Connie Dialeris and Graham Wood
    http://www.oracle.com/oramag/oracle/00-sep/index.html?o50tun.html
    - "Performance Tuning with Statspack, Part II" Connie Dialeris and Graham Wood
    http://otn.oracle.com/deploy/performance/pdf/statspack_tuning_otn_new.pdf
    - "Analyzing a Statspack Report: A Guide to the Detail Pages" Connie Dialeris and Graham Wood
    http://www.oracle.com/oramag/oracle/00-nov/index.html?o60tun_ol.html
    - "Why Isn't Oracle Using My Index?!" Jonathan Lewis
    http://www.dbazine.com/jlewis12.shtml
    - "Performance Tuning Enhancements in Oracle Database 10g" Oracle-Base.com
    http://www.oracle-base.com/articles/10g/PerformanceTuningEnhancements10g.php
    - Introduction to Method R and Hotsos Profiler (Cary Millsap, free reg. required)
    http://www.hotsos.com/downloads/registered/00000029.pdf
    - Exploring the Oracle Database 10g Wait Interface (Robin Schumacher)
    http://otn.oracle.com/pub/articles/schumacher_10gwait.html
    - Article referencing an OWB forum posting
    http://www.rittman.net/archives/001031.html
    - How do I inspect error logs in Warehouse Builder? - OWB Exchange tip
    http://www.oracle.com/technology/products/warehouse/pdf/Cases/case10.pdf
    - What is the fastest way to load data from files? - OWB exchange tip
    http://www.oracle.com/technology/products/warehouse/pdf/Cases/case1.pdf
    - Optimizing Oracle Warehouse Builder Performance - Oracle White Paper
    http://www.oracle.com/technology/products/warehouse/pdf/OWBPerformanceWP.pdf
    - OWB Advanced ETL topics - including sections on operating modes, partition exchange loading
    http://www.oracle.com/technology/products/warehouse/selfserv_edu/advanced_ETL.html
    - Niall Litchfield's Simple Profiler (a creative commons-licensed trace file profiler, based on Oracle Trace Analyzer, that displays the response time profile through HTMLDB. Perhaps could be used as the basis for the repository/reporting part of the project)
    http://www.niall.litchfield.dial.pipex.com/SimpleProfiler/SimpleProfiler.html
    - Welcome to the utPLSQL Project - a PL/SQL unit testing framework by Steven Feuerstein. Could be useful for automating the process of unit testing mappings.
    http://utplsql.sourceforge.net/
    Relevant postings from the OTN OWB Forum
    - Bulk Insert - Configuration Settings in OWB
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=291269&tstart=30&trange=15
    - Default Performance Parameters
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=213265&message=588419&q=706572666f726d616e6365#588419
    - Performance Improvements
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=270350&message=820365&q=706572666f726d616e6365#820365
    - Map Operator performance
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=238184&message=681817&q=706572666f726d616e6365#681817
    - Performance of mapping with FILTER
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=273221&message=830732&q=706572666f726d616e6365#830732
    - Poor mapping performance
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=275059&message=838812&q=706572666f726d616e6365#838812
    - Optimizing Mapping Performance With OWB
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=269552&message=815295&q=706572666f726d616e6365#815295
    - Performance of the OWB-Repository
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=66271&message=66271&q=706572666f726d616e6365#66271
    - One large JOIN or many small ones?
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=202784&message=553503&q=706572666f726d616e6365#553503
    - NATIVE PL SQL with OWB9i
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=270273&message=818390&q=706572666f726d616e6365#818390
    Next Steps
    Although this is something that I'll be progressing with anyway, I'd appreciate any comments from existing OWB users on how they currently perform performance tuning and testing. Whilst these are perhaps two distinct subject areas, they can be thought of as the core of an "OWB Best Practices" framework, and I'd be prepared to write the results up as a freely downloadable whitepaper. With this in mind, does anyone have existing best practices for tuning or testing? Has anyone tried using SQL trace and TKPROF to profile mappings and process flows, or used a unit testing framework such as utPLSQL to automatically test the set of mappings that make up a project?
    If you have any feedback, add it to this forum posting or send it directly to me at [email protected]. I'll report back on a proposed approach in due course.

    Hi Mark,
    Interesting post, but I think you may be focusing on the trees and losing sight of the forest.
    Coincidentally, I've been giving quite a lot of thought lately to some aspects of your post. They relate to some new stuff I'm doing. Maybe I'll be able to answer in more detail later, but I do have a few preliminary thoughts.
    1. 'How efficient is the generated code?' is a perennial topic. There are still some people who believe that a code generator like OWB cannot be in the same league as hand-crafted SQL. I answered that question quite definitively: "We carefully timed execution of full-size runs of both the original code and the OWB versions. Take it from me, the code that OWB generates is every bit as fast as the very best hand-crafted and fully tuned code that an expert programmer can produce."
    The link is http://www.donnapkelly.pwp.blueyonder.co.uk/generated_code.htm
    That said, it still behooves the developer to have a solid understanding of what the generated code will actually do, such as how it will take advantage of indexes, and so on. If not, the developer can create such monstrosities as lookups into an un-indexed field (I've seen that).
    2. The real issue is not how fast any particular generated mapping runs, but whether or not the system as a whole is fit for purpose. Most often, that means: does it fit within its batch update window? My technique is to dump the process flow into Microsoft Project and then add the timings for each process. That creates a critical path, which I can visually inspect for any bottleneck processes. I usually find that there are no more than one or two dogs. I'll concentrate on those, fix them, and re-do the flow timings. I would add this: the dogs I have seen, I have invariably replaced. They were just garbage; they did not need tuning at all - just scrapping.
    Gee, but this whole thing is minimum effort and real fast! I generally figure that it takes maybe a day or two (max) to soup up system performance to the point where it whizzes.
    Fact is, I don't really care whether there are a lot of sub-optimal processes. All I really care about is the performance of the system as a whole. This technique seems to work for me. 'Course, it depends on architecting the thing properly in the first place. Otherwise, no amount of tuning is going to help worth a darn.
    Conversely (re. my note about replacing dogs) I do not think I have ever tuned a piece of OWB-generated code. Never found a need to. Not once. Not ever.
    That's not to say I do not recognise the value of playing with deployment configuration parameters. Obviously, I set auditing=none and operating mode=set based, and sometimes I play with a couple of different target environments to fool around with partitioning, for example. Nonetheless, if it is not a switch or a knob inside OWB, I do not touch it. This is in line with my diktat that you shall use no other tool than OWB to develop data warehouses (and that includes all documentation!). (OK, I'll accept MS Project.)
    Finally, you raise the concept of a 'testing framework'. This is a major part of what I am working on at the moment. This is a tough one. Clearly, the developer must unit test each mapping in a design-model-deploy-execute cycle, paying attention to both functionality and performance. When the developer is satisfied, that mapping will be marked as 'done' in the project workbook. Mappings will form part of a stream, executed as a process flow. Each process flow will usually terminate in a dimension, a fact, or an aggregate. Each process flow will be tested as an integrated whole. There will be test strategies devised, and test cases constructed. There will finally be system tests, to verify the validity of the system as a production-grade whole (stuff like recovery/restart, late-arriving data, and so on).
    For me, I use EDM (TM). That's the methodology I created (and trademarked) twenty years ago: Evolutionary Development Methodology (TM). This is a spiral methodology based around prototyping cycles within Stage cycles within Release cycles. For OWB, a Stage would consist (say) of a dimensional update. What I am trying to do now is graft this onto a traditional waterfall methodology, and I am having the same difficulties I had when I tried to do it back then.
    All suggestions on how to do that grafting gratefully received!
    To sum up, I'm kinda at a loss as to why you want to go deep into OWB-generated code performance stuff. Jeepers, architect the thing right and the code runs fast enough for anyone. I've worked on ultra-large OWB systems, including validating the largest data warehouse in the UK. I've never found any value in 'tuning' the code. What I'd like you to comment on is this: what will it buy you?
    Cheers,
    Donna
    http://www.donnapkelly.pwp.blueyonder.co.uk

  • Is Creator + JSF the way forward?

    Hi
    First post to forum ...
    Is Creator + JSF the way forward, or would you use something else for a new project?
    I have developed a number of small web apps, typically < 20 JSPs plus supporting classes for DB access, all using a simple hard-coded MVC servlet scheme.
    However, I have become acutely aware that while this is fine for small stuff, it does not provide any short cuts for saving development time, and it probably does not scale all that well either.
    Hence I am looking for a single technology to speed up design/coding, and I want to move up a level in terms of professional best practice for java web app development.
    Two attractive technologies appear to be Struts and JSF. My current preference is for JSF over Struts, almost entirely because Creator "looks" like such a good IDE for rapid development. Is this a good enough reason for going with JSF, or should I look again at Struts or another IDE?
    Also, I have noticed that Creator on my Windows PC is "dog slow"; no doubt it needs more memory. But this sort of issue concerns me, because I am left wondering what other problems I will run into only after I have spent valuable time on the project. Will I be left thinking that choosing JSF because of Creator was not such a good idea?
    Finally, my next app will be a web-based custom CMS system. I would very much appreciate your thoughts, based on past experience, on which way I should go for time-saving Java web app development: JSF, Struts, Creator, JDev...
    Many thanks
    Paul

    I share your concerns. I'm also new at enterprise java development. I've found, however, that Creator has been an excellent tool for cutting my learning curve in half. It helped me to better understand the relationships between the various java components.
    Having said that, however, I (and others I know) have experienced frustration with the slow responses and code generation. I presume this is because the IDE itself may also be written in Java rather than C. It does NOT affect the run-time environment when your application is deployed, however. Your application will run great when deployed to a "real" server.
    My only other problem has to do with the JDBC drivers. Some, if not all, of Creator's data binding relies on JDBC method calls. Where I work we use a Sybase database, and while Sun and BEA have perfectly fine Sybase JDBC drivers, the folks I work for insist that we use jConnect, the JDBC driver Sybase provides. This driver is lacking a number of methods and, when you are forced to use it, ALL your data binding goes out the window. This means I have to spend just as much time, if not more, writing my own code to populate the Creator objects (specifically table objects) or learning and implementing more tools like Hibernate or whatever. I'm trying to make my life SIMPLER, y'knowwhatamean?
    So while the tool is great, well integrated, well thought out, etc., be careful of your actual deployment environment.

  • EXCEL - SAP integration ... which way forward?

    Hi folks,
    A customer (with a SAP 4.6D system) would like to have an EXCEL sheet populated with SAP data, with some additional formatting (headers, subheadings, totals, colouring cells based on certain values, etc.) within the EXCEL sheet ...
    Off the top of my head I can think of some possibilities here:
    - ALV view of table with download (but no extra formatting options here ... )
    - ABAP OLE commands (not sure this is still being used)
    - EXCEL integration via BDS (I remember that you can have two-way integration here, but how far does it go?)
    - RFC communication to for example .NET RFC server via .NET connector ..
    Does anyone have any experience with this?  What are the pros and cons of each of these possibilities?  Any suggestions on the best way forward?
    My customer is looking for a 'cheap', fast solution and is not willing to spend more than 2-3 days on it ...
    Thx,
    Steven

    Hello Steven 
    I prefer the ALV grid in combination with ALV templates stored in SAP (available from 4.6C).
    I don't know all the methods from the other replies, but my experience is that OLE and the use of Excel macros with VBA script is a bad idea. Some years ago I made an interface like that, based on (as far as I remember) Excel 95 and R/3 3.1I. After an upgrade to R/3 4.6C I had to change the ABAP program to make it work again. Two months later we changed to Office 2000, and after that the macros didn't work anymore - I never fixed it - I changed to the ALV grid.
    With the combination of the ALV grid and Excel templates, you solve each problem with the tool that is best suited: you use ABAP to extract the data and Excel for the formatting.
    In a previous posting I listed some hints about this technique. I don't know how to link to another posting, so I'll just cut and paste:
    1) An ABAP program extracts data from the database and presents the data with Excel Inplace.
    2) Instead of the SAP standard templates SAP_OM.XLS and SAP_MM.XLS the program uses a customised excel template created by the "business people" themselves. (Now they can't blame you for the layout).
    Where to find more info:
    OSS notes 358644 and 548409
    You will have to build your own templates by creating modified copies of the SAP standard templates SAP_OM.XLS and SAP_MM.XLS. If you cannot find these, see note 316728 for how to copy them from client 000.
    You will have to work with transaction OAOR to administer the templates and the program BC_BDS_UPLOAD to upload new templates.
    You will also find information in the ALV grid documentation on help.sap.com.
    Best regards
    Thomas Madsen Nielsen

  • Just some thoughts

    Just some thoughts.
    Adobe says that piracy is one of the reasons for going to the cloud. CC has to verify the subscription periodically; why not add that process to the perpetual license for a nominal yearly fee? On the other hand, why can't a user subscribe to CC for a month to get the latest version and then go on the web to get a crack?
    I also own Elements 10 (unless they move that to a subscription too), which has lots of the functions that CS6 has for photo enhancement. I found some plugins that really enhance Elements, e.g. ElementsXXL, which adds new menu items, icons, buttons, key shortcuts and dialogs that integrate seamlessly into the user interface of Photoshop Elements. ElementsXXL bridges the gap between Photoshop Elements and Photoshop and greatly enhances the image editing experience in Photoshop Elements.
    I am sure other plugins will appear to make Elements more like Photoshop.

    "Adobe says that piracy is one of the reasons for going to the cloud. CC has to verify the subscription periodically why not add that process to the perpetual license for a nominal yearly fee."
    Why on earth would anyone want to pay a "nominal yearly fee" for another nuisance created by Adobe?  They can already shut down perpetual license holders' software if they pull the plug on the activation servers.  Don't you know that their servers check our software now as it is?
    Just recently with the release of the CS2 software to previous owners, many people could no longer use their already-installed CS2 software because Adobe's CS2 servers were shut down.  What does that tell you?
    I'm not going the way of cruising the internet for cracks.  That's for disillusioned kids who think everything should be free.  When my CS6 software no longer works I'm going with another company.  Probably Corel for some of the programs.  CorelDRAW was always better than Illustrator anyway.  Industry standard doesn't always mean it's great.  It just means everyone is using it and they expect you to do as well.  Personally, I think Macromedia made better software before Adobe took them over and MM software was a lot cheaper.

  • Discussions: Some Thoughts From Abroad

    Hi all,
    I would like to add some thoughts on the points debate. I think we all recognise that the majority of users of Apple Discussions are in the USA. I don't have a problem with that; it's a fact of life. However, the new points system does have a bias towards those replying from the same country as the OP, due to time zone differences.
    I don't ever see the day when I can imbibe in the Lounge with the 4s and 5s. By the time I get to my computer each day, all the questions I am able to answer have already been answered unless there are a few insomniacs around, or I become one myself!
    I liked the old system of awarding points. Not just because I was able to get a few myself, but mainly because I was able to give recognition to someone who had helped me expand my knowledge of Macs, even when I was not the OP or after the question had been answered. I realise that this system was open to abuse, but only by a few, surely. Limiting the number of votes one could award each day according to one's Level would address that.
    In closing I would just like to say a big thank you to all who keep these Discussions going. Your work is appreciated.
    Cheers
    NikkiG

    Hi NikkiG,
    "I liked the old system of awarding points. Not just because I was able to get a few myself, but mainly because I was able to give recognition to someone who had helped me expand my knowledge of Macs, even when I was not the OP or after the question had been answered. I realise that this system was open to abuse, but only by a few surely. Limiting the amount of votes one could award each day according to your Level would address that."
    And this possibility of awarding others was shaping a very different spirit:
    A lot of tips could be found among "side-replies" that were on topic but came later, after the OP was already satisfied with the urgent fix.
    It was also shaping a spirit of sharing knowledge in a deeper, more elaborate way. To me this was really about a community of people who shared this kind of enthusiasm for good knowledge, for sense and for general improvement.
    Now that we live in immediateness, the "game" is much more about Shooting First: the one short and "simplified" answer that will, yes, quick-fix, but that also kills most of the "side-discussion", which is now treated as almost off-topic...
    "By the time I get to my computer each day, all the questions I am able to answer have already been answered"
    Well, the general feeling, anyway, no matter what time it is in Australia, is more like:
    "By the time I'm done reading a question as it appears at the top of a forum, one or two "Link-to-TheXLab"-type quick answers have already been copy-pasted within the same minute"!
    (Not only Donald but some others - who are better at self-censorship, who manage equally well in the new system, or who don't speak much because they don't know enough about the true reasons/motivations - have left a word or two here, in their own way, about your feeling.)
    "A bias towards those replying from the same country as the OP due to time zone differences"
    I see what you mean,
    although for me, as a European, on the contrary, the "20% chances" handicap that I used to feel in the old Discussions is no longer there;
    you can still give quick replies to some newbies from Japan/New Zealand/Australia, but I agree there may not be as many of those as from the US...
    Your suggestions:
    I second and support them, but now that the system works smoothly again with no slowing down
    - Thanks so much, Mods!! -
    I'm not sure whether they are still very high on the list of priorities.
    This also depends on the "political" angle from which you view them. If, with Tuttle, we consider that "many of the new features mainly enhance the forums' usefulness to Apple"
    (my main personal guess about this aspect would be the gain in phone technical support crew management),
    possibly there will be no more changes in this area?
    Possibly even more immediateness in the future;
    after all, we are seriously entering the "iPod world" now, although let's hope the good old times of computing being a matter of a few Cupertino geniuses and a bunch of truly enthusiastic pure Mac people are not too far gone now?
    Let me finish with your words, as they are exactly those I'd have liked to write myself:
    In closing I would just like to say a big thank you to all who keep these Discussions going. Your work is appreciated.
    Axl

  • I have written a binary file with a specific header format in LabVIEW 8.6 and tried to read the same data file using LabVIEW 7.1, where I found some difficulty. Is there any way to read a LabVIEW 8.6 data file using LabVIEW 7.1?

    I have written a binary file with a specific header format in LabVIEW 8.6 and tried to read the same data file using LabVIEW 7.1. Here I found some difficulty. Is there any way to read the data file (from LabVIEW 8.6) using LabVIEW 7.1?

    I can think of two possible stumbling blocks:
    What are your 8.6 options for "byte order" and "prepend array or string size"?
    Overall, many file I/O functions changed with LabVIEW 8.0, so there might not be an exact 1:1 code conversion. You might need to make some modifications. For example, in 7.1 you should use "Write File"; the "binary file VIs" are special-purpose (I16 or SGL). What is your data type?

  • HT4059 I currently use iBooks on my iPhone 4S... I want to be able to send several PDF documents at the same time in one email. I thought there was a way to do this?  Can you help please?

    I currently use iBooks on my iPhone 4S... I want to be able to send several documents at the same time in one email. I thought there was a way to do this?  Can you please help?

    Install ClamXav and run a scan with that. It should pick up any trojans.   
    17" 2.2GHz i7 Quad-Core MacBook Pro  8G RAM  750G HD + OCZ Vertex 3 SSD Boot HD 
    Got problems with your Apple iDevice-like iPhone, iPad or iPod touch? Try Troubleshooting 101

  • 24 hours so far downloading Lion and no end in sight. Is this really the way forward?

    Here in the UK a lot of us have fairly pathetic download speeds, so I ask Apple: is this really the way forward?

    Same for me - started 20 July at about 7pm in Germany with VDSL capable of 2.5MB/s.
    The download started well but soon dropped to about 300KB/s. I stopped and went to bed at about 10pm.
    Resumed on 21 July at 7am - between 100KB/s and 300KB/s for about 90 minutes. Stopped and went to work.
    Resumed now at about 2:30pm, sitting between 50KB/s and 100KB/s :-(

  • Have an iPhone 4S; I deleted some pictures. Is there any way I can get them back on my phone?

    Have an iPhone 4S; I deleted some pictures. Is there any way I can get them back on my iPhone? Thanks

    If the photos were from the Camera Roll, then they are saved in your latest iPhone backup. You could restore the backup from iTunes (or iCloud).

  • Best way forward?

    I'm currently running a (very old, I believe) version of Oracle Application Server that has PHP version 4.3.9 built in. I'm certain that this was a version that was compiled by Oracle and then dumped on a CD for installation.
    I'm trying to extend it, but reading the forums it would appear that the version I'm using isn't extensible (see this related post: Extending Oracle AS supplied PHP).
    Can anybody confirm this, and also advise whether Zend Core for Oracle is the best way forward from here? I'm running SunOS (Solaris 10) and Oracle 10g.
    Regards
    Rich

    Zend Core is a good way forward. Alternatively you may want to compile PHP yourself.
    -- cj

  • Nokia X/X2 -- Some thoughts and questions

    Moved from Symbian Belle, some thoughts:
    1. The email widget doesn't automatically refresh. You have to tap manually to load new mail, despite specifying a sync interval.
    2. It asks me to sign in to an MS, Exchange, Nokia or Google account etc. to use the calendar. Can I not get a local one, which I can choose to sync if desired, similar to Symbian?
    3. Does the X2 have a play-via-radio capability? I don't see the app, but wanted to know if the hardware capability is there.
    4. How do I know whether the phone is using GPS, GLONASS or BeiDou? The specifications list the positioning capabilities.
    5. I don't see the Mix Radio app. Possibly not available in certain countries.
    6. Belle had an option to automatically switch between GSM and 3G where available. The X2 only allows a 3G-only setting and GSM/CDMA.
    7. The People and Phone apps need to be bundled together. While scrolling through contacts via the Phone app, if you accidentally tap one, it starts dialling. Plus it doesn't group entries together, and it repeats contacts that have more than one number.
    8. I didn't really like the Metro and Fastlane UIs. One reason was that the wallpaper was not changeable, so if you have gaps, all you see is black.
    9. The Glance screen doesn't seem customizable. The time appears on two lines, which seems a bit odd.

    Reply to 9:
    Glance Screen on X v2 is almost amazing. http://developer.nokia.com/resources/library/nokia-x-ui/essentials.html
    A past Nokia X user, now an X2 user.

  • N900 Is it the way forward?

    I've just received the N900 (UK, on Vodafone), but straight away I've come across an error which means it's going back for a replacement handset - although Vodafone are currently struggling to keep the N900 in stock!  The error is "internal error closing app".  Having read lots of blogs, I'm now concerned that the N900 is not yet ready for the market and that perhaps I should be looking for something else.
    Is the phone the way forward?

    I've had my N900 for just over a month now and I would gladly recommend it to anyone. The community that developed around Maemo has moved on to the Maemo 5 operating system, and with it comes years of knowledge of the system, greater than any phone/computer manufacturer would be able to provide to the general user.
    As a result of the open-source nature of the device, I can download apps like 'Comix' for free, rather than paying nearly a tenner for one that comes with much less usability on the iPhone. The full web browser also lets me get Last.fm on the go for nothing, rather than having to create a premium account for the right to listen on the go.
    The largely scientific user base for Linux means that rather than a large selection of apps that are just for recreational use, a lot of useful programs get ported. Graphical calculators, periodic tables and planispheres are just a few that I have used gratefully.
    The file management system is also very user-friendly, allowing me to create folders and move items with ease, even though I only have one button to do it with.
    So to anyone thinking of getting this phone, I would say do it. As a phone it works better than any other phone out there, because it has Skype and Google Talk built in and is always connected. And what is a phone except a device for people to contact you on? As an internet tablet it is getting there. It can be used with ease to find out anything on the web, and I have yet to come across a web page that does not load as it loads on my home PC, if a few seconds slower. The Firefox web browser also comes with add-ons that can gather location-specific information (e.g. coffee houses and petrol stations). As a portable gaming machine, well, take your pick from the entire back catalogue of NES, SNES and DOS games. As a hacker tablet, it already has software to crack wireless encryption available to everyone, but you have to be comfortable with the X terminal and everything else a hacker should know to use it.
    All this, and it has only been on general sale in the UK for just over a month. Sorry if I come off as a fanboy, but I can't see what comes next in functionality. Nokia have got it so right with this device that I can see it sticking around for a long time, even if the OS changes.

  • Way forward? Multicast chans & catch up not availa...

    All was fine since taking them up, through Tuesday, but on Wednesday when I went to watch the Premier League footy, the whole lot of multicast channels (and catch-up) had collapsed, both SD and HD.
    After trying all the usual things I phoned support, and a very diligent and polite gentleman spent a fair amount of time trying everything I had (I didn't like to interrupt him), with the same zero result. He has promised to escalate the problem and I should receive a phone call this evening .......... at this point I started to get that déjà vu feeling.
    So what is it with this IPC6023 fail mode and what is the way forward...... anyone?????

    Hi TrickyDicky,
    IPC6023 is caused by two possible scenarios:
      1) No multicast packets received for 5 seconds
      2) More than 5 packets per second lost (the message can be disabled using the "Picture Quality Alert" setting)
    I'm guessing you are getting the error in relation to scenario 1.
    Does your YouView box have Internet connectivity direct from the Home Hub? Can you play On Demand content / iPlayer etc.?
    Steve
    BT TV Expert

  • Every time I try to open the Creative Cloud installer I get error code 22. I have tried turning off my firewall with no success. What is the best way forward?

    Every time I try to open the Creative Cloud installer I get error code 22. I have tried turning off my firewall with no success. What is the best way forward?

    Is that the case-sensitive drive error? See: Error "Case-sensitive drives not supported" or similar install error | Mac OS
