Spoke to High Level Support about the whine

Basically, he said it will be either a software update or a mass repair for a component.
We shall see.
Jake

No.
Basically, I spoke to customer relations because I was angry again, since the replacement unit I got still has the whine. They transferred me to tech support, but she said he was high level (top tier)... This guy said it's really a shame and that they are working on it; he stated he was just down the hall and there are like 20 or 30 of these cases open and engineering is working on it.
He really couldn't say what the deal was. He just told me that if it's a software update, he'll email me before it goes public and keep me posted. Otherwise, if it's a mass recall, then I'll know that too.

Similar Messages

  • How to get rid of high level hiss/whine in interview

    I am recording oral histories and interviews for my graduate research. I am using Garageband 2, an iMic, and an omnidirectional mic from Radio Shack. The gentlemen I am working with are in their late 80s and early 90s, so these are softer voices, generally accented, and the recordings are done in a living room or on a kitchen table.
    My problem is that my recordings all have a high-pitched whine/hiss when I play them back, making it difficult and downright annoying to listen to them. I have made sure that the iMic input is selected in both System Preferences and the GarageBand preferences. I've also switched to recording the vocal Real Instrument (only using AULOWPASS, all other effects turned off) in mono rather than stereo to see if that would help (it fixed the other problem I had of my iBook pulling sounds from both the on-board mic and the external mic).
    So, what is this high-pitched whine and how do I get it to stop? Is it a mic problem, or is GarageBand picking up other sounds I normally don't hear? If so, how do I adjust my levels? Generally it's bad to listen to in GarageBand, and even worse when converted to MP3. At times, it almost entirely drowns out my subjects. I make sure the input level in System Preferences > Sound is almost all the way up, or else I tend to have problems hearing the men I'm recording. For some reason the input volume in the track info is always greyed out and set at the lowest level. It's not feedback as far as I can tell, and I'm sure to have the monitor off and mute my output speakers when recording.
    I appreciate any help from anyone who has had and fixed this problem, or has any other suggestions.

    I am using [...] an iMic, and an omnidirectional mic from Radio Shack.
    Not the greatest combination; low-cost equipment often gets mediocre results, I'm afraid B-(>
    Additionally, an omnidirectional mic is going to pick up every noise everywhere in the environment.
    My problem is that my recordings all have a high-pitched whine/hiss when I play them back,
    You can try fiddling with the standard EQ, GraphicEQ, ParametricEQ, and the various shelf filters, but it's tough to make a recording of questionable integrity sound good.
    recording in mono rather than stereo
    You have to record in mono; you're only using a single mic.
    (it fixed the other problem I had of my iBook pulling sounds from both the on-board mic and the external mic).
    That is not physically possible on a Mac; it's one or the other, but not both, unless you specifically create an Aggregate Device to use both.
    So, what is this high-pitched whine and how do I get it to stop?
    You're not going to like this, but, better equipment.
    Is it a mic problem,
    Likely part of it.
    or is GarageBand picking up other sounds I normally don't hear?
    You mean "or is the mic picking up sounds I don't usually hear", and yes, that's possible as well. We get very used to sounds we hear all the time, so much so that we cease to hear them.
    the input volume in the track info is always greyed out and set at the lowest level.
    http://www.thehangtime.com/gb/gbfaq2.html#volumesliderdimmed

  • "Supports rollup to higher level of aggregation" property

    Hi Gurus,
    I am confused about the significance of the check box "Supports rollup to higher level of aggregation". I see it always checked, and I could understand from a few notes that "It is selected so that data stored at this level can be aggregated to produce the total for its parent level without double counting or leaving anything out". If I uncheck it, I get an error message and the RPD becomes inconsistent.
    Please help me with an example:
    1. In which scenario would we uncheck this option?
    2. How does the BI server execute the hierarchy with this option unchecked?
    Thanks,
    Sreekanth Jala

    Hi Kevin,
    Yes, there are restrictions on the levels at which you can put notes and the levels at which you can edit them, and hence those will be greyed out at certain levels.
    For more details please refer to my earlier post [No notes can be processed in the current selection - Error Msg|Re: No notes can be processed in the current selection - Error Msg.]
    Regards,
    Digambar

  • HDMI Audio not working on Q190 (along with all higher level Audio Formats)

    Help, I have been given the runaround by support. I cannot get the HDMI audio to work with my Pioneer surround sound; only the Intel display audio and the Realtek S/PDIF port show in Control Panel (Win 8 x64), and the S/PDIF port is not capable of supporting 7.1 sound, bitstreaming, DTS, Dolby HD, etc. Tech support appears incapable of fixing the issue and wanted to send me to software support and pay. I have only had the machine for 4 days and it has never supported higher-level sound.
    Every other device I have (had or currently) connected to the receiver works just fine. I have to figure this out or return the machine; the audio is the most important aspect for me. Besides, when you advertise 7.1 support, the machine you sell should be able to do it.

    Hey guys,
    I have had this Q190 with the Celeron CPU since last week. I am using XBMC Frodo, and the HDMI is connected to my Onkyo TX-NR809 AVR and from the Onkyo to the TV. The sound is 7.1 with PLIIz and it works fine. I think it may be a driver problem, because the Realtek audio in the Q190 works fine with the preinstalled Win 8. Realtek is kind of bad with drivers; I lost my wifi after upgrading to Win 8.1. After a few days with no wifi, I found out that the driver was bad - yes, it was a Realtek wifi driver, but posted by Lenovo for Win 8.1.
    I have another friend who also just bought the Q190 and he reported no audio problems, so I think it is just a matter of troubleshooting the drivers and configuration. I do love the form factor of the Q190.

  • Errors in the high-level relational engine. A connection could not be made to the data source with the DataSourceID of '

    When I deploy the cube which is sitting on my PC (local), the following 4 errors come up:
    Error 1 The datasource, 'AdventureWorksDW', contains an ImpersonationMode that is not supported for processing operations.  0 0
    Error 2 Errors in the high-level relational engine. A connection could not be made to the data source with the DataSourceID of 'Adventure Works DW', Name of 'AdventureWorksDW'.  0 0
    Error 3 Errors in the OLAP storage engine: An error occurred while the dimension, with the ID of 'Customer', Name of 'Customer' was being processed.  0 0
    Error 4 Errors in the OLAP storage engine: An error occurred while the 'Customer Alternate Key' attribute of the 'Customer' dimension from the 'Analysis Services Tutorial' database was being processed.  0 0

    Sorry, hit the wrong button there. That is not the entire solution; setting it to default would work when using a single box, but not in a distributed application solution. If you are creating the Analysis Services database manually or using the wizard, then you can set the impersonation to your heart's content, as long as the right permissions have been set on the Analysis Services server.
    In my case I was using MS Project Server 2010 to create the database in the OLAP configuration section. The situation is that the underlying build script has been configured to use the default setting, which is the SQL service account, and this account does not have permission in Project Server, I believe.
    Changing the account to match the Project service account allowed for a successful build/creation of the database. My verdict is that this is a bug in Project Server, because it needs to include the option to choose impersonation when creating the database; that way it will not use the default, which led to my error in the first place. I do not think there is a one-size-fits-all fix for this problem; it is an environment-by-environment issue and should be resolved as such. But the idea around fixing it is: if you are using the SQL Analysis Services service account as the account creating the database and cubes, then default or service account is fine. If you are using a custom account, then set that custom account in the impersonation details after you have granted it the SQL Analysis Services administrator role. You can remove that role after the DB is created and harden it by creating a role with administrative permissions.
    Hope this helps.

  • SUS - Added Data Type Enhancement and Higher level Proxies are not active

    Hello,
    I've added a field to our current data type enhancement Z_Purchase_Order_Item. Once I regenerate the proxy on the enhancement and activate it, the field appears as it should in the high-level items that use the enhancement (PurchaseOrderRequest_In). But those proxies have become inactive, and when I try to activate them I get this message:
    Interface II_BBPX1_SUS_PO was repaired before the Modification Assistant was enabled.
    All Modification Assistant functions only apply to future modifications, not to those already undertaken. This means:
    - The modification overview only displays future modifications.
    - When resetting to the standard, the system will reset all objects to their current version, since the actual standard can no longer be identified by the Modification Assistant.
    - Support for adjustment after an upgrade will only be available for future modifications. Modifications that already exist must be re-made manually using version management.
    The next message says:
    Object can only be created in SAP package.
    Then the status bar shows "Proxy Activated". But when I close and reopen the proxy, I see that it is once again inactive.
    Does anyone know what I need to do to activate this proxy?
    Thanks,
    Matt

    In SPROXY you can open your proxy and then view the Activation Log under the Goto menu. The log will explain better what the problems might be. In my case I needed to activate another data type enhancement first.
    Thanks,
    Matt

  • Rant: can't get past external 1st level support

    I have been repeatedly told by SAP SMP 1st level support to use SCN for a kernel problem, so I guess the next best forum is the local pub to discuss kernel-level task handlers...
    Perhaps you have also noticed that sometimes you cannot terminate sessions in SM04? Sometimes when ending a login session with /nex, some sessions don't close as expected? Behind these is the function module TH_DELETE_USER, which is a wrapper for a kernel function from the task handler family. So I opened a customer message, as this is now a problem for me (to ensure that logoff is completed).
    So 1st level support from an external service partner (?) picked up the ticket, and for the past month the following game has been going on:
    - of course, they want to log on to my system... (makes no sense for a kernel function anyway...)
    - they tell me to upgrade and try again, it might work... (but cannot say why they are feeling lucky today)
    - they tell me that the FM is not supported (so I should never log off again...)
    - they tell me that kernel task handlers are a consulting issue, so use SCN...
    So... now I am here, and I would like some help getting past this 1st level external support blockade. Or, by some miracle, an SCN kernel consultant ninja will show up and explain to me how I should log off correctly?
    I will post a link to this thread in the customer message, which I don't see the point in closing, despite 1st level getting worried about their statistics...
    Cheers,
    Julius

    Hi Kristen
    Does SAP support have some way to analyse the free text (maybe HANA comes into play) or the incident attributes to find stats on:
    - the number of times an incident was sent back to the customer
    - the delay between an incident being returned by the customer (or sent the first time) and responded to
    - free text phrases like "please escalate", "as mentioned previously", "stop telling me to apply notes", "I already tried that" - I'm sure a list of common phrases could be supplied by Coffee Corner
    - cases where the customer provided system connection details and the SAP support person sends the call back to request system connections
    Analyse the history to see the common themes on the customer support side. Most of what we raise in SCN discussions like these is documented in the incident.
    The first time I raised support messages (now incidents), my colleagues told me not to get my hopes up that my issue would be resolved, and to expect my incident to be sent back to me. Even grabbing facts and data on how many calls are sent back for "rework" might help distinguish issues that can be fixed from customer perception.
    In short - maybe SAP support needs to look at changing the metrics by which they identify issues and then use those as their KPIs to improve their services.
    Unrelated: I wish our SCN reputations could be linked to SAP Marketplace incidents - if someone high up in a space raises a question and also links in the SCN discussion, then it might deserve immediate escalation past level 1. This one might be wishful thinking.
    Regards
    Colleen

  • Why does OWB 9.2 generate UK's on higher levels of a dimension?

    When you specify levels in a dimension, OWB 9.2 generates unique key constraints in the table properties for every level, but only the UK on the lowest level is visible in the configuration properties. Why then are these higher-level UKs generated? Is this a half-baked attempt to implement the possibility of generating a snowflake model in OWB?
    Jaap.

    Piotr, Roald and others,
    This is indeed a topic we have spent a lot of our time on these past months. We are addressing this because (in my old days as a consultant I had the same problem) we know that this is a common problem.
    So the solution is one that goes in 2 directions:
    - Snowflake support
    - Advanced dimension data loading
    Snowflake is obvious; it may not be desired for various reasons, but we will start supporting it and loading data for it in mappings.
    If you want a star table, you will know that a completely flattened table with day at the lowest level will not be able to give you a unique entry for month. So what people tend to do is one of the following:
    - Proclaim the first of the month the month entry point (this stays closest to the star table and simply relies on semantics on both the ETL and query side).
    - Create extra day-level entries which symbolize the month, so you have a day level with extra entries.
    - Create views, extra tables etc. to cover the extra data.
    - Create a data set within the tables that solves the key problem.
    We have opted for the last one. What you need for this is a set of records that uniquely identifies any record at any level. Then you add a key which links to the dimension at the same point (a dimension key), so all facts always use this surrogate key to link (this makes life in query tools easier).
    For a time dimension you will have a set of day records with their months etc. in them (the regular star). Then you add a set of records with NULL in the day, holding months and up, and so on up the hierarchy. For this we will provide the ETL logic (in other words, you as a designer do not worry about this!). On the query tool side you must be a little cautious with counts, but this is doable and minor.
    As you can see, none of the solutions is completely transparent, but we believe this is one that solves a lot of problems and gives you the best of all worlds. We will also support the same data structure in the OLAP dimensions for the database as well as in the relational dimension. NOTE that there are some disclaimers with this, as we are doing software here...
    In principle, however, we will solve your problem.
    Hope this explains some of our plans in this area.
    Jean-Pierre

  • High Level Recommendations For Multi-Tier Application

    Hello:
    I have been reviewing Windows Azure documentation and I'm still somewhat confused/unsure regarding which configuration and set of services is best for my organization. I will start off by giving a high-level description of what the environment should be.
    A) 2 "Front End" IIS Instances, Load Balanced running an MVC 4.0/.Net 4.5 Web Application
    B) A "dedicated" SQL SERVER 2008 R2 server with medium-high resources (ample RAM and processing power)
    C) An application server which hosts a Windows Service. This service will require access to the SQL Server listed in B. In addition, the IIS "Front Ends" listed in A should have access to a "shared" folder or directory where files can be dropped and processed by this Windows Service.
    I have looked at Azure Web Sites, Azure Virtual Machines and Cloud Services, and I'm not sure what is best for our situation. If we went with Azure Web Sites, do we need TWO virtual machines, or a single virtual machine which can "scale out" up to 6 instances? We would get a Standard web site, and the documentation I see says it can scale out to 6 instances. I'm somewhat confused regarding the difference between a "Virtual Machine" and an "Instance". In addition, does Azure Web Sites come with built-in load balancing between instances, virtual machines, or both? Or is it better to go with Azure Virtual Machines and host the IIS front end there? I'm just looking for a brief description/advice as to which would be better.
    Regarding the SQL Server database, is there a benefit to using Azure SQL Database? Or should we go with a virtual machine with SQL Server installed as the primary template? We have an existing SQL Server database, and initially we would like to move our existing schema up to the cloud. We are looking for decent processing power and RAM for the database.
    Finally, the "application" tier, which requires a Windows Service: is an Azure Virtual Machine the best route to take? If so, can an Azure Web Site (given that is the best setup for our needs) write to a shared folder/drive on a secondary virtual machine? Basically, there will be JSON instruction files dropped into a folder which the application tier will pick up, de-serialize and process on the back end.
    As a final question, if we also wanted to use SSRS, are there updated/affordable pricing and hosting options for this as well?
    I appreciate any feedback or advice. We are definitely leaning towards Azure, and I am trying to wrap my head around what our best configuration and service selection should be.
    Thanks in advance

    Hi,
    A) 2 "Front End" IIS Instances, Load Balanced running an MVC 4.0/.Net 4.5 Web Application
    B) A "dedicated" SQL SERVER 2008 R2 server with medium-high resources (ample RAM and processing power)
    C) An application server which hosts a Windows Service. This service will require access to the SQL Server listed in B. In addition, the IIS "Front Ends" listed in A should have access to a "shared" folder or directory where files can be dropped and processed by this Windows Service.
    Based on my experience and your requirements, you could try this solution:
    1. Use two cloud services to host your "front end" web application. For load balancing, you could use Traffic Manager to configure the load-balancing settings.
    2. For SQL Server or SSRS, you have two choices: 1) create a SQL Server VM, or 2) use SQL Azure and Azure SSRS. I guess either of them could meet your requirements.
    3. About your C requirement: what type of application is it? If it is a website, you could host it on Azure Web Sites or a cloud service.
    If you want to manage the files from your code, I think you could save your files into Azure blob storage. You could add and delete files using the REST API (http://msdn.microsoft.com/en-us/library/windowsazure/dd135733.aspx) or code (http://www.windowsazure.com/en-us/documentation/articles/storage-dotnet-how-to-use-blobs-20/), and blob storage can serve as a shared file folder.
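    If the application tier happens to be written in Java, here is a rough, hedged sketch of the blob-as-drop-folder idea using the legacy Azure Storage client library (com.microsoft.azure.storage); the connection string, container name and blob name below are placeholders, not values from this thread:

        import com.microsoft.azure.storage.CloudStorageAccount;
        import com.microsoft.azure.storage.blob.CloudBlobClient;
        import com.microsoft.azure.storage.blob.CloudBlobContainer;
        import com.microsoft.azure.storage.blob.CloudBlockBlob;

        public class BlobDropFolder {
            public static void main(String[] args) throws Exception {
                // Placeholder connection string - substitute your storage account's name and key.
                String connStr = "DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=mykey";
                CloudStorageAccount account = CloudStorageAccount.parse(connStr);
                CloudBlobClient client = account.createCloudBlobClient();

                // The container plays the role of the shared "drop" folder.
                CloudBlobContainer container = client.getContainerReference("instructions");
                container.createIfNotExists();

                // The front end drops a JSON instruction file...
                CloudBlockBlob blob = container.getBlockBlobReference("job-42.json");
                blob.uploadText("{\"action\":\"process\"}");

                // ...and the application tier later picks it up, processes it, and deletes it.
                String json = blob.downloadText();
                System.out.println("Picked up: " + json);
                blob.deleteIfExists();
            }
        }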
    As for the billing question, you could ask Azure billing support for more details. Try this: http://www.windowsazure.com/en-us/support/contact/
    Hope it helps.
    Regards,

  • High level estimation for datasource enhancement

    Hi,
    I have recently been assigned to a support project. Now we are going to do a datasource enhancement (we need one more field, i.e. jurisdiction code for tax calculation; the DataSource is 0FI_GL_4), and we need a high-level estimation of the procedure and the time duration for each step (development to production).
    Can anyone help me with this concern...
    Thanks,
    Shaliny

    Shaliny,
    If the field is readily available to add with no hiccups, everything can be done in 3 days.
    However, if you need to go for a customer exit during the enhancement, you need at least 15 days, which includes testing.
    It generally depends on how complex it is expected to be. But have a buffer of at least 2 days after completion on your side.

  • High-level interrupt handler

    Why can I decide whether or not to support a high-level interrupt? Under what conditions will the Solaris kernel map my hardware interrupt (INTA from the PCI bus) to a high-level interrupt? When should I refuse to support a high-level interrupt, and why? Can I force my hardware interrupt to be a high-level interrupt?
    Also, think about this: most hardware interrupts indicate something important, such as buffers being full. If they are assigned a priority below the scheduler's, it really does not make sense.
    Is it possible to block any hardware interrupts? Or, to put it another way, can I prioritize hardware interrupts in Solaris?
    Thanks
    tyh

    Hi,
    On x86, each IRQ has a software priority assigned to it implicitly by the bus driver, although I think you could override it in driver.conf. Unlike SPARC, the processor doesn't support a PIL, so software priorities are implemented by masking all lower-priority IRQs and re-enabling interrupts.
    High-priority interrupts, above dispatcher level, run in the context of the current thread on the CPU; normal-level interrupts are handled by interrupt threads.
    The interrupt threads are the highest-priority threads on the system, so they will preempt any other running threads. In addition, mutexes in Solaris use priority inheritance, so the interrupt threads will get to run.
    In general, high-level interrupts are allocated to devices with small buffers, such as serial or floppy, so that their buffers get serviced in the fastest possible time. Others can afford to wait just a bit.
    Your driver should check to see if its device has been allocated a high-level interrupt. If this is the case, the high-level handler should clear the interrupt, save the data/status (in the driver state structure, perhaps) and trigger your soft-level interrupt handler (which will run as a thread).
    Blocking of interrupts is done for you when you acquire a spin mutex (i.e. one initialised with an iblock cookie). Such a mutex is required to synchronise access to data shared with a high-level handler in your driver.
    Please take a look at the Intel Driver writers orientation at:
    http://soldc.sun.com/developer/support/driver/docs/Solaris_driver_models/index.html
    Hope that helps,
    Ralph
    SUN DTS

  • High-Level JTS/TopLink design question

    I've gone through the "using JTS with TopLink" docs, and they mostly make sense. However, I still don't understand how TopLink "knows", when I call acquireUnitOfWork(), whether or not I'm participating in a distributed 2PC transaction.
    Said another way:
    Let's say I've got an application based on TopLink (registering appropriate JTS stuff) that exposes an API that can be accessed remotely (RMI, SOAP, whatever).
    And, I've got another, separate application using a different persistence-layer technology (also supporting JTS) that also has an API.
    Now, I create a business method that uses the APIs from both of these applications, and I want them to participate in a single, distributed transaction.
    At a high level (source code is unnecessary), how does that work?
    Would the API need to support the ability to specify a TransactionContext, or is this all handled behind the scenes by the two systems registering with the Transaction Service?
    If this is all handled through registration, how do these two systems know that these specific calls are all part of the same XA transaction?

    Nate,
    TopLink participates in JTA/JTS transactions but does not control them. When you configure TopLink to use the JTA/JTS services of the host application server, you are deferring TX control to the J2EE container. TopLink will in this case register each acquired UnitOfWork with the current active TX from the container. The container will also ensure that the JDBC connection provided to TopLink is bound by the active TX.
    In order to get 2PC you must register multiple resources in the same JTA TX. The TX processing during commit will then make the appropriate callbacks to the underlying data sources, as well as the necessary callbacks to listeners such as TopLink, to have their SQL issued against the database.
    In short: The J2EE container manages the 2PC TX and TopLink is just a participant.
    Doug Clarke
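    To make the mechanics concrete, here is a minimal, hedged sketch of a bean-managed-transaction business method; OrderFacade and BillingFacade are hypothetical stand-ins for the two applications' APIs, not anything from TopLink itself:

        import javax.naming.InitialContext;
        import javax.transaction.UserTransaction;

        // Hypothetical facades for the two separately-persisted applications.
        interface OrderFacade   { void saveOrder() throws Exception; }
        interface BillingFacade { void recordCharge() throws Exception; }

        public class PlaceOrderService {
            public void placeOrderAndBill(OrderFacade orders, BillingFacade billing) throws Exception {
                // Standard JNDI name for the container's transaction handle.
                UserTransaction utx = (UserTransaction)
                        new InitialContext().lookup("java:comp/UserTransaction");
                utx.begin();                 // one JTA transaction, associated with this thread
                try {
                    orders.saveOrder();      // TopLink registers its UnitOfWork in the active TX
                    billing.recordCharge();  // the other persistence layer enlists its own resource
                    utx.commit();            // the container drives 2PC across both resources
                } catch (Exception e) {
                    utx.rollback();          // either both succeed or neither does
                    throw e;
                }
            }
        }

    The point to notice is that neither call takes an explicit TransactionContext: the transaction is associated with the calling thread (and propagated over RMI/IIOP for remote calls), which is how both systems end up enlisted in the same XA transaction.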

  • High level of transactions...

    I have an app that will need to support a high level of concurrency. What aspects should I incorporate in my JDBC code when performing updates to the database?
    Thanks in advance!

    No. As you can see from what I posted earlier, you can do transactions on a single connection using just JDBC.
    EJBs buy you the following:
    (1) declarative transactions, where you don't have to embed transactional behavior in your code.
    (2) two-phase commit, where more than one database participates in a transaction.
    You can do both of these things with Spring and its JTA support, so you don't really need EJBs at all.
    %
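    As a minimal sketch of the single-connection case mentioned above (the account table and its columns are made up for illustration):

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.SQLException;

        public class JdbcTransferExample {
            // Moves an amount between two rows of a hypothetical account table, atomically.
            public static void transfer(Connection con, int fromId, int toId, long cents)
                    throws SQLException {
                boolean oldAutoCommit = con.getAutoCommit();
                con.setAutoCommit(false);              // begin a transaction on this connection
                try (PreparedStatement debit = con.prepareStatement(
                         "UPDATE account SET balance = balance - ? WHERE id = ?");
                     PreparedStatement credit = con.prepareStatement(
                         "UPDATE account SET balance = balance + ? WHERE id = ?")) {
                    debit.setLong(1, cents);
                    debit.setInt(2, fromId);
                    debit.executeUpdate();
                    credit.setLong(1, cents);
                    credit.setInt(2, toId);
                    credit.executeUpdate();
                    con.commit();                      // both updates become visible atomically
                } catch (SQLException e) {
                    con.rollback();                    // undo partial work on any failure
                    throw e;
                } finally {
                    con.setAutoCommit(oldAutoCommit);  // restore the connection's previous mode
                }
            }
        }

    Under high concurrency you would additionally keep transactions short, pick an appropriate isolation level via con.setTransactionIsolation(...), and be prepared to retry on deadlock.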

  • High-level view of steps for 10g OWB-OLAP to Discoverer

    I would greatly appreciate ANY feedback on the following steps. These are not necessarily correct or the best way to do this. I am attempting to take source data, use OWB to create the analytic workspace, and from there have the metadata available and used by Discoverer.
    This is rather high-level; feel free to jump in anywhere.
    We are trying to see if we can get away with NOT using the Analytic Workspace Manager (AWM) if possible. With that in mind, we are trying to make the most of the process with OWB & OLAP.
    Is this possible to do without ever using the AWM? Can we go end to end (source data ---> Discoverer final reporting) primarily using OWB to get to the point where we can use the metadata for Discoverer?
    Can anyone relate experiences, perhaps, that would make me want to consider using the AWM at certain points instead?
    Most importantly, if I do use this methodology, would I be safe after everything has been set up? Would I want to consider using AWM at a later point for performance reasons while I am using Discoverer? Or would OWB be helpful as well in some aspects of data maintenance? Any clue how often I might need to rebuild, and if so, what to use in that case to minimize time?
    Thanks so much for any insight or opinion on anything I have mentioned!

    Hi Gregory,
    I guess the answer is that it depends. My first question is whether you are looking at a relational OLAP or multi-dimensional OLAP solution? This may change the discussion slightly, but let's look at some thoughts:
    In essence you can use the OWB bridge to generate the AW objects (cubes etc.). If you do that (for either ROLAP or MOLAP), you will get the AW objects enabled for querying using any OLAP API query tool, like BI Beans or the new Discoverer for OLAP. The current OWB release does not run the Discoverer enabler (creating views specifically written for EUL support in Disco Classic).
    So if you are looking at Disco Classic, you must use the AWM route...
    The other thing that you must be aware of is that the OWB technology is limited to cloning the relational objects for now. This means that you will create a new model based on your existing data. If you want to tweak the generated objects, you will probably need to go to the underlying code in either scenario.
    So if you want to create calculated measures, for example, you could generate a cube with OWB, create a "dummy measure" and add the formula in OLAP DML. The same goes for some other objects you may want to create, such as text measures...
    The benefit of creating placeholder or dummy measures is that the metadata is completely in order; you simply change the measure's behavior...
    For the future (the beta starts relatively soon), OWB will support much more modeling, like logical cubes, and you can then deploy directly to OLAP. Also, the mappings are transparent to the storage: you map to a logical cube and OWB will implement the correct logic to load either OLAP or relational targets.
    We will also start supporting calculated measures, sparsity definitions, partitioning and compression on cubes, and we will support parallel building of cubes.
    Hope this gives you some insight!
    Jean-Pierre

  • High level language

    When will a language be called a high-level language?
    Is it true that if a language supports graphics then it is high level?
    Or what characteristics should a language possess to be called high level?
    Fortran? Is it high level?
    In fact, I don't know on what basis or features I would call a language high level. Do you know the answer?
    Thank you

    Hi..
    I just asked about this from Albert Einstein (not sure about spelling), you know, the guy with the weird hair.
    And he said that it is relative.
    Ex:
    Compared to Assembly, Fortran is high level; compared to C, C++ is high level; and C is also high level compared to Assembly.
    But if you are talking about generations of languages, then there are well-defined boundaries.
    For example:
    1GL - machine code (1010101010101010101010)
    2GL - the language has a corresponding code for each executable instruction that the processor understands (Assembly), so compiling is a one-to-one translation of codes.
    3GL - each language statement results in multiple processor instructions once compiled.
    4GL - each language statement results in multiple processor instructions once compiled, and lots of coding and debugging tools are available (IDEs).
    Note: a 3GL language can later become a 4GL language, e.g. C.
    Some experts argue that object-oriented languages also belong to the 4th generation, but some say object orientation is the 5th generation.
    5GL - object-oriented languages: Java, C++
    6GL - (provided that the 5th is OOP) "natural-like languages", where the code can be written in a flexible manner.
    Correct me if I am wrong.
