Perhaps a higher level of recognition?

This is a serious suggestion.
This thread is an instance of a question being posed, and then being answered by the original poster.
http://discussions.apple.com/message.jspa?messageID=8647956#8647956
Not remarkable in itself, but this one is: a number of others have posted since, with words like, "You truly are a lifesaver!!!! I saved days of frustration by reading your solution to this problem!!! I cannot thank you enough!!!!"

Bob Lang1 wrote:
Or perhaps the points system should be discarded completely?
I don't think this is a good idea. In a perfect world, each suggestion would stand on its own merits and a points system would be of no value, but in this less-than-perfect one it is useful to know whether someone making a suggestion has a track record of successfully helping others. A good track record is no guarantee that a particular suggestion is good or even benign, but it does offer some guidance, particularly when a suggestion that has worked for some may have other consequences that 'top users' are aware of but others are not.
For example, the suggestion in the referenced topic is to remove all the non-folder items from ~/Library/Mail. If one does that, any rules and quite a few other settings are removed as well, which some users looking for a good way to force the reimport of mailboxes might consider very undesirable.
In general, users with a lot of points to their credit have learned, sometimes the hard way, of the wisdom of the rule, "First, do no harm."

Similar Messages

  • Errors in the high-level relational engine on Schedule Refresh Correlation ID: 7b159044-c719-41f9-8d0f-da6f73576d6e

Connections are all valid and work when I set up the refresh, but when the scheduled refresh occurs I get this error:
Errors in the high-level relational engine. The following exception occurred while the managed IDataReader interface was being used: The Data Transfer Service has encountered a fatal error when performing the data upload. The remote server returned an error: (400) Bad Request. The remote server returned an error: (400) Bad Request. Transfer client has encountered a fatal error when performing the data transfer. The remote server returned an error: (400) Bad Request. The remote server returned an error: (400) Bad Request.;transfer service job status is invalid Response status code does not indicate success: 400 (Bad Request).. The current operation was cancelled because another operation in the transaction failed.
It is trying to refresh 3 simple tables with fewer than 9,000 rows each.
Also, I'd like to add that the refresh works fine directly from Excel as well...
Another fact just in: it seems to work on one out of the 3 tables sometimes, so the first table gets a success in the log, but sometimes it fails (it succeeded twice and failed once with the above error). The second table never succeeds and always gets the error above.
The 3rd table never even gets attempted.
Am I running into some sort of timeout, perhaps?
Refresh history entry: Failure
Correlation ID: 7b159044-c719-41f9-8d0f-da6f73576d6e
Started 04/01/2015 at 01:50 AM, finished 04/01/2015 at 01:53 AM (duration 00:03:14)
Power Query - Sendout_Records: Not tried
Power Query - Positions: Errors in the high-level relational engine (the same Data Transfer Service / 400 Bad Request error quoted above)
Power Query - Position_Activities: Success.

This is not because of the number of rows; it's the execution time. The query takes more than 7 minutes to execute, and it seems this is what makes the refresh fail.
    Thank You

  • High-level interrupt handler

Why can I decide whether or not to support a high-level interrupt? Under what conditions will the Solaris kernel map my hardware interrupt (INTA from the PCI bus) to a high-level interrupt? When should I refuse to support a high-level interrupt? Why? Can I force my hardware interrupt to be a high-level interrupt?
Also consider that most hardware interrupts indicate something important, such as the case where buffers are full. If they are assigned a priority below the scheduler's, it really does not make sense.
Is it possible to block any hardware interrupts? Or, to put it another way: can I prioritize hardware interrupts in Solaris?
    Thanks
    tyh

    Hi,
On x86, each IRQ has a software priority assigned to it implicitly by the bus driver, although I think you can override it in driver.conf. Unlike SPARC, the processor doesn't support a PIL, so software priorities are implemented by masking all lower-priority IRQs and re-enabling interrupts.
High-priority interrupts, above dispatcher level, run in the context of the current thread on the CPU; normal-level interrupts are handled by interrupt threads.
The interrupt threads are the highest-priority threads on the system, so they will preempt any other running threads. In addition, mutexes in Solaris use priority inheritance, so the interrupt threads will get to run.
In general, high-level interrupts are allocated to devices with small buffers, such as serial or floppy, so that their buffers get serviced in the fastest possible time. Others can afford to wait just a bit.
Your driver should check whether its device has been allocated a high-level interrupt. If so, the high-level handler should clear the interrupt, save the data/status (in the driver state structure, perhaps) and trigger your soft-level interrupt handler (which will run as a thread).
Blocking of interrupts is done for you when you acquire a spin mutex (i.e. one initialised with an iblock cookie). Such a mutex is required to synchronise access to data shared with a high-level handler in your driver.
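A minimal sketch of that split, assuming the legacy DDI routines (ddi_intr_hilevel(), ddi_get_iblock_cookie(), ddi_add_softintr(), ddi_trigger_softintr()); the driver and structure names (mydrv_*) are made up for illustration and the device register access is only a placeholder:

#include <sys/types.h>
#include <sys/conf.h>
#include <sys/ksynch.h>
#include <sys/ddi.h>
#include <sys/sunddi.h>

/* Hypothetical per-instance state kept in the driver's soft state. */
typedef struct mydrv_state {
    dev_info_t     *dip;
    kmutex_t        high_mu;      /* spin mutex shared with the high-level handler */
    ddi_softintr_t  soft_id;
    uint32_t        saved_status; /* status captured by the high-level handler */
} mydrv_state_t;

/* Soft interrupt handler: runs as a thread and does the real work. */
static uint_t
mydrv_softintr(caddr_t arg)
{
    mydrv_state_t *sp = (mydrv_state_t *)arg;
    uint32_t status;

    mutex_enter(&sp->high_mu);
    status = sp->saved_status;
    sp->saved_status = 0;
    mutex_exit(&sp->high_mu);

    /* ... process 'status', move buffered data, wake up waiters ... */
    return (status ? DDI_INTR_CLAIMED : DDI_INTR_UNCLAIMED);
}

/* Normal-level handler used when the device did NOT get a high-level interrupt. */
static uint_t
mydrv_intr(caddr_t arg)
{
    /* Placeholder: read the device and process directly in the interrupt thread. */
    return (DDI_INTR_CLAIMED);
}

/* High-level handler: do the bare minimum, then hand off. */
static uint_t
mydrv_highintr(caddr_t arg)
{
    mydrv_state_t *sp = (mydrv_state_t *)arg;

    mutex_enter(&sp->high_mu);
    sp->saved_status = 1;   /* placeholder: read and clear the device status register here */
    mutex_exit(&sp->high_mu);

    ddi_trigger_softintr(sp->soft_id);  /* defer the heavy lifting to a thread */
    return (DDI_INTR_CLAIMED);
}

/* Called from attach(): decide how to wire up the interrupt. */
static int
mydrv_setup_intr(mydrv_state_t *sp)
{
    ddi_iblock_cookie_t ibc;

    if (ddi_intr_hilevel(sp->dip, 0)) {
        /* High-level interrupt: spin mutex + soft-interrupt hand-off. */
        (void) ddi_get_iblock_cookie(sp->dip, 0, &ibc);
        mutex_init(&sp->high_mu, NULL, MUTEX_DRIVER, (void *)ibc);
        (void) ddi_add_softintr(sp->dip, DDI_SOFTINT_MED, &sp->soft_id,
            NULL, NULL, mydrv_softintr, (caddr_t)sp);
        return (ddi_add_intr(sp->dip, 0, NULL, NULL,
            mydrv_highintr, (caddr_t)sp));
    }

    /* Normal-level interrupt: a single handler running as an interrupt thread. */
    return (ddi_add_intr(sp->dip, 0, NULL, NULL, mydrv_intr, (caddr_t)sp));
}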
    Please take a look at the Intel Driver writers orientation at:
    http://soldc.sun.com/developer/support/driver/docs/Solaris_driver_models/index.html
    Hope that helps,
    Ralph
    SUN DTS

  • High-level view of steps for 10g OWB-OLAP to Discoverer

I would greatly appreciate ANY feedback on the following steps. These are not necessarily correct or the best way to do this. I am attempting to take source data, use OWB to create the analytical workspace, and from there have the metadata available and used by Discoverer.
    This is rather high-level, feel free to jump in anywhere.
We are trying to see if we can get away with NOT using the Analytic Workspace Manager (AWM) if possible. With that in mind, we are trying to make the most of the process with OWB and OLAP.
    Is this possible to do without ever using the AWM? Can we go end to end (source data--->discoverer final reporting) primarily using OWB to get to the point where we can use the metadata for Discoverer?
    Can anyone relate experiences perhaps that would make me want to consider using the AWM at certain points instead?
Most importantly, if I do use this methodology, would I be safe after everything has been set up? Would I want to consider using AWM at a later point for performance reasons while I am using Discoverer? Or would OWB be helpful as well in some aspects of data maintenance? Any clue how often I might need to rebuild, and if so, what to use in that case to minimize time?
    Thanks so much for any insight or opinion on anything I have mentioned!

    Hi Gregory,
I guess the answer is that it depends. My first question is whether you are looking at a relational OLAP or a multidimensional OLAP solution. This may change the discussion slightly, but let's look at some thoughts:
In essence you can use the OWB bridge to generate the AW objects (cubes etc.). If you do that (for either ROLAP or MOLAP) you will get the AW objects enabled for querying, using any OLAPI query tool, like BI Beans or the new Discoverer for OLAP. The current OWB release does not run the Discoverer enabler (creating views specifically written for EUL support in Disco Classic).
So if you are looking at Disco Classic you must use the AWM route...
The other thing you must be aware of is that the OWB technology is limited to cloning the relational objects for now. This means that you will create a new model based on your existing data. If you want to tweak the generated objects you will probably need to go to the underlying code in either scenario.
So if you want to create calculated measures, for example, you could generate a cube with OWB, create a "dummy measure" and add the formula in OLAP DML. The same goes for some other objects you may want to create, such as text measures...
The benefit of creating placeholder or dummy measures is that the metadata is completely in order; you simply change the measure's behavior...
For the future (the beta starts relatively soon) OWB will support much more modeling, like logical cubes, and you can then deploy directly to OLAP. Also the mappings are transparent to the storage, so you map to a logical cube and OWB will implement the correct logic to load either OLAP or relational targets.
We will also start supporting calculated measures, sparsity definitions, partitioning and compression on cubes, as well as parallel building of cubes.
    Hope this gives you some insight!
    Jean-Pierre

  • APO DP: Disaggregation to product&plant level from higher levels.

    Hi.
We do demand planning on groups of products and for country/region in general; we have around 48,000 CVCs in our current setup. It works very well.
    A new situation has arisen where we need to have the forecast split down to product and plant level.
    As is we simply don't have the information at this level of granularity.
I don't see how we can add, for instance, product to our setup; we have around 20,000 products, so the number of CVCs in DP would become massive if we did this.
I was thinking that perhaps something could be done by exporting the relevant key figures to a new DP setup with fewer characteristics (to keep the number of CVCs down) via some InfoCubes; perhaps some disaggregation could be done via some tables and the BW update rules. This still leaves the issue of how to get the figures properly disaggregated to plant and product, though.
    Does anyone have experiences on how to get the figures split to lower levels from DP when you're planning on a higher level?

    Simon,
One approach, as you mentioned, is to create a Z table in which you set up the disaggregation proportions from product group level down to product level or product/location level, for example:
Product Group X = 100  ->  Product A@loc1 = 10, Product B@loc1 = 90
Download your planning area data into InfoCube C and then use BW routines to convert the data from group level in InfoCube C to the lower level, referring to the Z table, into another InfoCube.
SAP also provides standard functionality for splitting the aggregate demand plan into a detailed-level SNP plan, through functionality like location split or product split.
Essentially you will be using the same concept in your BW solution, or you may also want to consider releasing your DP to the SNP planning area as a solution for disaggregating the data to a lower level.
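Conceptually, the Z-table-driven routine just applies fixed proportions to the aggregate figure. A minimal illustration of that arithmetic (plain C with hypothetical names, not SAP code):

#include <stdio.h>

/* One row of the hypothetical Z table: the share of the group total
   that belongs to a given product/location combination. */
typedef struct {
    const char *product_loc;
    double      proportion;   /* the proportions for one group must sum to 1.0 */
} split_rule_t;

/* Disaggregate an aggregate key-figure value using the Z-table proportions. */
static void
disaggregate(double group_value, const split_rule_t *rules, int n)
{
    for (int i = 0; i < n; i++)
        printf("%-16s %6.1f\n", rules[i].product_loc,
               group_value * rules[i].proportion);
}

int main(void)
{
    /* Product Group X = 100, split as in the example above. */
    const split_rule_t rules[] = {
        { "Product A@loc1", 0.10 },
        { "Product B@loc1", 0.90 },
    };
    disaggregate(100.0, rules, 2);
    return 0;
}

The real work in a BW routine is, of course, looking up the right proportions per characteristic combination and writing the results into the target InfoCube.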
    Regards,
    Manish

  • Adding higher level key words?

    Greetings fellow LR4 users -
I already have several keywords set up in Lightroom 4.2 (Win7). I want to add a 'higher level' keyword to include some of those keywords already present. For example, if I already have
    Dog
    Cat
    Bird
    How can I add 'Animals' so that the result is -
    Animals
         Dog
         Cat
         Bird
I know that I can add 'lower-level' keywords to existing keywords. Perhaps this is not possible? I can't find any options in the dialog boxes that will allow this.
    Thanks in advance for your thoughts/suggestions.
    Ron

Create a keyword that says "Animal". Then drag "Dog" onto the "Animal" keyword; it becomes a sub-keyword of "Animal". Repeat for the other desired sub-keywords.

  • Bb z3 in high level service center

I have many issues with my BB Z3, like the Wi-Fi disconnecting by itself, Wi-Fi Direct not working, Bluetooth tethering, and contacts being deleted automatically. I sent my mobile to RIM (a higher-level service center) in Bangalore. One month has passed but I have no info regarding the phone... I am worried. Please, friends, help me if anyone has an idea of when they will return my phone.

    Hi anuj1,
    Your best bet is to continue to follow up and visit the service center where you dropped off your device. As they may not necessarily be run by BlackBerry, but may be an affiliate of BlackBerry, they will know the current state of your device and be able to provide you with updates.
    If they have provided you with a ticket number or contact information, follow up with them to get more information. Otherwise, I would recommend visiting them in person.
    Best of luck with your device. Perhaps the issues you're experiencing with your BlackBerry Z3 are still being worked on, or have already been fixed by the service center!

  • HDMI Audio not working on Q190 (along with all higher level Audio Formats)

Help, I have been given the runaround by support. I cannot get HDMI audio to work with my Pioneer surround sound; only the Intel display audio shows in Control Panel (Win 8 x64) and the Realtek S/PDIF port, and it is not capable of supporting 7.1 sound or bitstreaming or DTS, Dolby HD, etc. Tech support appears incapable of fixing the issue and wanted to send me to software support and pay. I have only had the machine for 4 days and it has never supported higher-level sound.
Every other device I have (had or currently) connected to the receiver works just fine. I have to figure this out or return the machine; the audio is the most important aspect for me. Besides, when you advertise 7.1 support, the machine you sell should be able to do it.

    Hey guys,
I have had this Q190 with the Celeron CPU since last week, and I am using XBMC Frodo; the HDMI is connected to my AVR Onkyo TX-NR809 and from the Onkyo to the TV. The sound is 7.1 with PLIIz. It works fine. I think it may be some driver problem, because the Realtek audio in the Q190 works fine with the preinstalled Win 8. Realtek is kind of bad with drivers: I lost my Wi-Fi after upgrading to Win 8.1, and after a few days with no Wi-Fi I found out that the driver was bad; yes, it was a Realtek Wi-Fi driver, but posted by Lenovo for Win 8.1.
I have another friend who also just bought the Q190 and he reported no audio problem, so I think it is just a matter of troubleshooting the driver and configuration. I do love the form factor of the Q190.

  • Running a Sub-VI and monitoring data that is generated on a higher level VI

    Hi All, 
This question must have been asked before, but I cannot find a suitable answer here on the forums...
    I have a 'top-level' VI that does a lot of things. I also have a sub VI that runs a frequency sweep on a piece of equipment. This is done with a for loop. 
    Problem: 
I want to monitor/access the data that is generated in the for loop (see attached; the 3 wires within the green circle are what I want to monitor).
    2 Questions:
    How can I access the data on the wires (within the loop) from a higher level VI?
    How can I then run this VI in a higher level VI while the higher level VI is continuing and not waiting for the sub-VI to complete?
I tried using a queue but I cannot seem to get that working.
    Any suggestions?
    Regards,
    Attachments:
LV problem.PNG (44 KB)

The queue is a good way to move data from a running subVI to another VI. Your problem is that if the subVI is inside a loop in the main VI, that loop cannot iterate until the subVI completes. The solution: have the subVI run in parallel with, not inside, the loop.
Look at the Producer/Consumer design patterns (at File >> New... >> VI >> From Template >> Frameworks >> Design Patterns >> Producer/Consumer). This may be more than you need at the moment, but it will show how the parallel code process works.
    Lynn
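Since LabVIEW block diagrams do not paste well into a forum post, here is the same producer/consumer idea sketched in C with POSIX threads: a producer (standing in for the frequency-sweep subVI) pushes each reading onto a queue while an independent consumer (the top-level VI) keeps running and drains it. All names and the fake measurement are made up for illustration:

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

#define QUEUE_SIZE 16

/* A tiny thread-safe queue standing in for LabVIEW's queue functions. */
typedef struct {
    double          items[QUEUE_SIZE];
    int             head, tail, count;
    bool            done;                  /* producer has finished */
    pthread_mutex_t mu;
    pthread_cond_t  not_empty, not_full;
} queue_t;

static void enqueue(queue_t *q, double v)
{
    pthread_mutex_lock(&q->mu);
    while (q->count == QUEUE_SIZE)
        pthread_cond_wait(&q->not_full, &q->mu);
    q->items[q->tail] = v;
    q->tail = (q->tail + 1) % QUEUE_SIZE;
    q->count++;
    pthread_cond_signal(&q->not_empty);
    pthread_mutex_unlock(&q->mu);
}

/* Returns false once the producer is done and the queue is drained. */
static bool dequeue(queue_t *q, double *v)
{
    pthread_mutex_lock(&q->mu);
    while (q->count == 0 && !q->done)
        pthread_cond_wait(&q->not_empty, &q->mu);
    if (q->count == 0) {
        pthread_mutex_unlock(&q->mu);
        return false;
    }
    *v = q->items[q->head];
    q->head = (q->head + 1) % QUEUE_SIZE;
    q->count--;
    pthread_cond_signal(&q->not_full);
    pthread_mutex_unlock(&q->mu);
    return true;
}

/* Producer: the "frequency sweep" subVI, one reading per sweep step. */
static void *sweep(void *arg)
{
    queue_t *q = arg;
    for (int step = 0; step < 10; step++) {
        double reading = step * 1.5;     /* fake measurement */
        enqueue(q, reading);
        usleep(100000);                  /* simulate instrument settling time */
    }
    pthread_mutex_lock(&q->mu);
    q->done = true;
    pthread_cond_broadcast(&q->not_empty);
    pthread_mutex_unlock(&q->mu);
    return NULL;
}

int main(void)
{
    queue_t q = { .head = 0, .tail = 0, .count = 0, .done = false };
    pthread_t producer;

    pthread_mutex_init(&q.mu, NULL);
    pthread_cond_init(&q.not_empty, NULL);
    pthread_cond_init(&q.not_full, NULL);

    pthread_create(&producer, NULL, sweep, &q);

    /* Consumer: the top-level VI keeps running and displays each value
       as soon as it arrives, without waiting for the sweep to finish. */
    double v;
    while (dequeue(&q, &v))
        printf("monitored value: %.2f\n", v);

    pthread_join(producer, NULL);
    return 0;
}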

  • Basic  XML Publisher Question: How to access tags in the higher levels?

    Hi All,
    We have a basic question in XML Publisher.
We have an XML hierarchy like the one below:
    <CD_CATALOG>
    <CATALOG>
    <CAT_NAME> CATALOG 1</CAT_NAME>
    <CD>
    <TITLE>TITLE1 </TITLE>
    <ARTIST>ARTIST1 </ARTIST>
    </CD>
    <CD>
    <TITLE> TITLE2</TITLE>
    <ARTIST>ARTIST2 </ARTIST>
    </CD>
    </CATALOG>
    <CATALOG>
    <CAT_NAME> CATALOG 2</CAT_NAME>
    <CD>
    <TITLE>TITLE3 </TITLE>
    <ARTIST>ARTIST3 </ARTIST>
    </CD>
    <CD>
    <TITLE> TITLE4</TITLE>
    <ARTIST>ARTIST4 </ARTIST>
    </CD>
    </CATALOG>
    </CD_CATALOG>
    We need to create a report like below:
CATALOG_NAME     CD_TITLE     CD_ARTIST
CATALOG 1        TITLE1       ARTIST1
CATALOG 1        TITLE2       ARTIST2
CATALOG 2        TITLE3       ARTIST3
CATALOG 2        TITLE4       ARTIST4
    So we have to loop at the level of <CD> using for-each CD. But when we are inside this loop, we cannot access the value of CAT_NAME which is at a higher level.
    How can we solve this?
Right now, we are using the workaround of set_variable and get_variable: we set the value of CAT_NAME inside an outer loop and use it inside the inner loop via get_variable.
Is this the proper way to do this, or are there better ways? We are running into trouble when the data is inside tables.

You can use <?../CAT_NAME?>, which steps up to the parent CATALOG element from inside the CD loop. Copy and paste this into your template:
<?for-each:CD?> <?../CAT_NAME?> <?TITLE?> <?ARTIST?> <?end for-each?>

  • Where can I find various high level examples of workflows being used

I am about to start a project with TCS 3.5 and have been participating in the Adobe webinars to help learn the components and specific techniques, but what I am lacking is an understanding of the various workflows I can model my project after or take bits from. Why start with FrameMaker in this workflow versus RoboHelp or even Word? Questions like this, I think, come from experience with the process, and I am thinking that what I am getting myself into is a chess game with all these pieces, and I don't want to paint myself into a corner by traveling down one route. I have seen this graphic:
    And this one:
    And this one:
But they are too generic and do not contain enough information to really understand the decision-making process one must go through on various projects.
Can we have a series of webinars made, all with the underlying theme of defining a working process or workflow, by having guests describe how they have used or are using this suite in real life on their own projects? One that might include a graphic showing the routes taken through the suite, with reasons why?
My project hopes to make a single-source internal site that will tie together various 3D portable industrial coordinate metrology systems (hardware and software). It would be used as a dispersal site for help, communications between users and SMEs, OEM information, QA requirements, established processes, scripting snippet downloads, statistics, and training (including SOJT). Portable industrial metrology has 8 different softwares that are used and right now about 8 different instruments. These include laser trackers and radars, articulated arms, scanners, and structured white and blue light, to name a few. The softwares include Spatial Analyzer, Veriserf, CompIT, eMscon, and AXYZ, to name a few there as well. I want to be able to participate in and add content to an internal SharePoint site, push content to users for stand-alone workstations, ePub, capture knowledge leaving the company through attrition, develop easy graphic-rich job aid sheets, and aid in evaluations of emergent software and hardware. I would also like to leave the option open to use the finished product as a Rosetta Stone-like translator between the software packages; doing this is the equivalent of doing this in these other software packages, for example.

PDF is definitely a format I want to include, to collaborate with other divisions and SMEs for one reason, but also for the ease of including 3D interactive target models within it, and for portability. I plan on being able to provide individual PDFs that are very specific in their topics and to also use them to disperse user guides, cheat sheets or job aids... something the user may want to laminate on their own and keep with them for reference, printed out. Discussion in these sheets would be drastically reduced to only the essentials, relying heavily on bullet points or steps, useful graphs, charts and tables... and of course illustrative images. I am thinking that these should be downloadable buttons to print on each topic section, not in a general appendix or such. They would hopefully be limited to one page, double-sided 8x10.
The cheat sheet would have a simple flow chart of how or where this specific topic fits in the bigger picture,
    The basic steps,
    Illustrations, equipment, setup
    Software settings for various situations in a table or chart,
    Typical result graph to judge with,
    Applicable QA, FAA regulation settings or concerns,
    Troubleshooting table,
    Topic SME contact info
    On the back, a screen shot infographic of software process
The trouble here is that I have read that FM sometimes has a problem successfully transferring highly structured or formatted material to RoboHelp. Does this then mean that I would take it from FM straight to PDF?
Our OEM material is very high-level stuff... basically for engineers and not shop-floor users... but that is not to say they don't have some good material that could be useful. Our internal content is spread out across many different divisions and continents, with various ways of saying the same thing. This leads QA to interpret the information differently depending on where the systems are put to work. We also have FAA requirements that need to be addressed and of which the user needs to be reminded.
Our company is also starting to see an exodus of the most knowledgeable users through retirement. Capturing the knowledge and soft-skill packages they have developed working here for 20-30 years is something I am really struggling with. I have only come up with two ideas so far:
    Internal User Web based Forum
Interviews (some SMEs do not want to make the effort of transferring knowledge by participating in anything if it requires an effort they don't see as benefiting themselves), to get video, audio or transcription records

  • Assign a Position in CRM to an Organization Unit at a higher level.

    Hi,
    While configuring the Organizational Management in CRM 5.1, I have copied the sales areas created in ECC to CRM.
The resulting structure is a freely definable Org Unit at the top level called Sales Area (as it was created in ECC), and under that all the Sales Organizations exist.
    I need to create positions for the Sales Organizations and assign employees to these positions created.
    As per our client requirement, I need to create the positions at the higher level (probably the Org unit level) and assign the employees to these positions created.
    The reason for doing this is that the same employees work across different Sales Orgs.
Is it possible to create the positions at the highest level so that they can be inherited by the Sales Orgs defined below, or do I need to create the positions and the employees separately for each Sales Org?
I would appreciate it if you could answer at the earliest; I need to complete the configuration as soon as possible.
    warm regards,
    Rohan Bhate

    Hello,
Try the SAP CRM: Webclient UI - Framework forum for this kind of question.
    Regards,
    Fred

  • Table_comparison - how to compare data at a high level

    Hi,
I have to do data validation at a high level between two tables that I am loading.
I am trying to use the table_comparison transform, but the problem is that my target table is at a much lower level than the one at which I want to compare data, so it has many more columns (both key and data fields) than what I want to compare.
Does the output of the Query transform (which I am using as input into table_comparison) have to be in exactly the same format as the comparison table? If not, can somebody suggest something else?
Or how can I compare the output of two Query transforms?
    Thanks,
    Saurabh Bansal

    Dear Saurabh,
Not sure if you have already got the solution to this. If yes, please close the thread.
If not, I would suggest you use a validation rule to compare the two tables and then, based on the PASS or FAIL result, check what needs to be done with the output.
Do post back if you have got the solution or need any further help, or else close the question.
    regards,
    Den

  • Phase out settings at a higher level such as brand or major customer

Have any of you ever set up phase-out assignments at a higher level than product and had it work correctly? For example, we want to phase out a brand for a major customer. In other words, a customer is dropping a brand and we don't want statistical forecast generated for that brand/customer combination any longer. I am able to set up the fields in the phase-out lifecycle settings for product, brand, and major account, but when I enter the brand and major account I still get forecast generated. It appears to stop for some products within the brand but not all. Another example is if a customer quits ordering from us: I want to set up the major customer to phase out so no forecast is generated.
If you have done this successfully, please let me know. Or if you would handle these situations in a manner other than phase-out, please let me know. We can do historical adjustments each period, but that is a lot of maintenance to do after each period before the statistical forecast is generated.
    Thanks
    Steve

    Hi Stephen,
Life cycle planning works only at the detail level (each CVC); the option of aggregate planning is helpful if you want to phase a certain CVC in or out when you are forecasting at the aggregate level.
One option is to have all products that fall under that brand and customer in the "profile assignment for life cycle" section. You can maintain a file and then automate the upload process into the assignment.
or
You can try to use the copy functionality in realignment (/SAPAPO/RLGCOPY), where you maintain the copy factor as NIL; when the stat fcst is generated, you can use this as the next step to zero it out, but you would need to maintain these entries manually.
or
The easiest and safest way would be to create a selection for those combinations and not include them in the planning job for stat fcst.
or
You can build a customised program to access the PA, PB, and data view and input the selection to zero out the stat fcst KF for that particular selection after the stat fcst run; here you would need to check whether the disaggregated values are good enough.
Hope it helps.

  • Recording my voice at a higher level

    Hi All,
I am recording my voice for a podcast in GarageBand. I am speaking directly into my mic, which is attached to my computer. How can I record my voice at a higher level? The output is not loud enough and the "track line" is almost flat. How do I change it so I can record at a higher level?
    Thanks!

    Roger Wilmut1 wrote:
    You will need a mixer or a microphone preamplifier. The audio input on Macs is line level - for example the 'tape' output from a hifi - and is insufficiently sensitive for a microphone.
    Hi, Roger,
I was looking at the Behringer Podcast Studio http://www.bhphotovideo.com/c/product/481377-REG/Behringer_PODCASTUDIO_FIREWIRE_PODCASTUDIO_FIREWIRE_Bundle.html#features
and was wondering if this would work for a first-time podcast. Is it better to pick the items up piecemeal? I was thinking of the Behringer 1202 (Xenyx or UB) as it has 2 more XLR inputs (a total of 4) and it's going for about $75. However, the FireWire audio interface with the studio package alone is going for *$77*. It seems that the combo would be more economical, but I don't want to sacrifice quality too much.
Is a Griffin iMic USB audio interface (or a Behringer UCA202 - Low Latency 2 Input / 2 Output USB/Audio Interface with Digital Output) just as good at around $30?
    I already have a several mics, some XLR, some 1/4" jacks -- condenser & dynamic.
    Eventually, I'd like to record a podcast with around 3-5 people all doing a round-table type discussion.
    Thanks for any input, Deborah
PS: I also like the Behringer mixer because of the CD/tape input, so that I can digitize old cassette tapes.
