Need ideas for best audio setup on Q180

Hello all.  I've got a Q180 and very nice it is too, apart from one major omission: there is no audio input/output on the rear panel, only on the front.  I want a setup where I can have speakers plugged into the rear somehow, but also be able to switch to a headphone and mic in the front when I need to Skype etc.
I've bought a small USB sound card for the rear panel, which seems to work well, but if I plug headphones into the front, I have to go to the audio control panel to change the default output to the internal audio system, and then change it back when I've finished.
Is there a better way of doing this?  How have others arranged their audio needs with their Q180s?
Thanks for any advice!
Lee

Yes, it's one of the big ergonomic mistakes they made with the Q180, along with the power button you press every time you plug something in (corrected on the Q190, at least).
Why would you want analogue output from the rear of a PC, where you can't see it and it looks neat? What a crazy notion (facepalms). Obviously you'd want the cable poking out from under the front flap for a far sleeker look?!
Anyway, the solution I came up with for one customer was a Bluetooth audio adapter.
A standard USB Bluetooth transmitter plugs into the back of the Q180 and the tiny Bluetooth receiver plugs into the 2.5mm audio-in socket on his monitor.
You can find them cheap on eBay - http://www.ebay.co.uk/itm/7dayshop-Bluetooth-Audio-Receiver-Adapter-Ref-BTR006-/400449362409?pt=UK_i...
You may still have some audio jiggery-pokery to deal with though.
Sometimes USB mic/headsets are easier to use, as they simply take priority when plugged in and hand back to the default device when unplugged.

Similar Messages

  • X-Fi Elite Pro: Which mode for best audio output?

    I need your help in determining the best audio output (level and clarity). Up until now, I have used the Entertainment mode (16-bit/44.1KHz) and the Audio Creation mode (at 96KHz; not sure at what bit depth this mode works, though...). I need to find the best mode and in-mode settings to achieve maximum audio output level (in terms of dynamic range and dB level) as well as optimal sound clarity. For the latter, I believe that the sound should be unadulterated, so additional enhancements such as EAX, Crystalizer, CMSS etc. should not be used...
    Any help will be appreciated

    Hello Hyperion,
    I can try to answer this.
    My experience (and you know that judgments of listening and audio quality are subjective): I listen through "Gaming Mode". This seems to be the best "playback sound quality", at least to my ears.
    Additionally, I really don't care for digital output; it's not my preference, so I have the M-Audio E66's connected through analog. Simple, but it seems to be effective in terms of listening and not tiring to the ears.
    "the sound should be unadulterated, so additional enhancements such as EAX, Crystalizer, CMSS etc should not be used"... I absolutely agree, to an extent. I set up my Elite Pro very "vanilla", but in the end I do use the Crystalizer (set to "2"). I don't believe in equalization, or very little, but the reality is that unless you have a perfectly designed room acoustically, you will probably benefit from some form of oversampling such as I stated. I would like others to chime in and share what operating systems they use. For best playback quality, I have found Windows Server 2008 serves me well. But motherboard, CPU and memory of course play out as major hardware considerations.
    Anyway, I'm getting a bit off topic here - would really like to hear from Creative on this. What do they use in testing (mode, software, motherboard, CPU, memory)?

  • Need Ideas for Minimalist Website and Image Gallery

    I am looking for ideas for creating a minimalist online image gallery/portfolio. I want to either use HTML for the site and then Flash for the portfolio images, or some other method. I want to click on a thumbnail and have a large image open.
    Should I use Flash or Dreamweaver? Would it be easier to load the images dynamically from a folder? Does anyone have an example they could show me? Thanks.

    I second Mahendra's suggestion of JAlbum. Excellent product!  Just so you know, you don't need to use their image hosting.  Simply download the software and install it on your PC.   The beauty of JAlbum is that it generates thumbnails and HTML pages for you from your folder of images.  Lots of different skins available. CSS can be customized.  And you can't beat the price.
    Minimalism is nice but on the web, if you don't have some real text in the HTML markup you're essentially invisible to search engines, language translators and web assisting technologies like screen readers.  I realize this is a portfolio, but you want to maintain some degree of visibility so people can find you.  For this reason, Flash sites don't do well.  HTML is preferred.
    Good luck with your project,
    Nancy O.
    Alt-Web Design & Publishing
    Web | Graphics | Print | Media  Specialists
    http://alt-web.com/
    http://twitter.com/altweb

  • Need ideas on best ways to convert files for best use on iPad

    Greetings,
    As a tech geek who now has an iPad, I am looking for ways to improve the way I perform daily work functions by using the iPad instead of a laptop. I am a regional service manager, and as such spend a lot of time on the road. I am hoping the iPad can make that travel easier.
    I've already purchased programs such as Documents to Go, Print Central, Keynote, and PDF Expert. I even looked at ServiceMax.
    Here is my issue. Currently, our service reports, inspection checklists, and parts ordering forms are all part of one ugly excel file that is 1MB in size and loaded with complicated formulas and macros. I know, having read some in these forums that the macros don't convert well, if at all. I also see that some PDF forms have issues.
    ServiceMax was ideal, but it only interfaces with Salesforce.com and not Oracle, which is what we use.
    Does anyone have a suggestion for a similar APP to ServiceMax, or an app/program that will allow me to custom create service reports that I and my team can utilize?
    Thanks in advance!

    My scan doesn't recognize the above device. If there is no SNMP support in the device, is there anything other than classifying it as a network device that I can do? Also, I have two devices running off of this device via its router Ethernet ports. One is an MS Vista PC, the other is an MS Win 8.1 device. Is it possible to scan through this router to get information about these more modern devices? If so, how? Likewise, the router is also a wireless access point. I only connect one device to it via Wi-Fi. Can the inventory include information about the WAP? Below are some scan test results for the router.
    Thanks!
    Dave
    D-Link DIR-615, Hardware version B2, Firmware 2.25
    WAN is Comcast Cable Modem Router
    Trace Route (15:03:03) Tracing route to 10.1.10.9 over...

  • BFILE: need advice for best practice

    Hi,
    I'm planning to implement a document management system. These are my requirements:
    (0) Oracle 11gR2 on Windows 2008 server box
    (1) Document can be of type Word, Excel, PDF or plain text file
    (2) Document will get stored in DB as BFILE in a table
    (3) Documents will get stored in a directory structure: action/year/month, i.e. there will be many DB directory objects
    (4) User has read only access to files on DB server that result from BFILE
    (5) User must check out/check in document for updating content
    So my first problem is how to "upload" a user's file into the DB. My idea is:
    - there is a "transfer" directory where the user has read/write access
    - the client program copies the user's file into the transfer directory
    - the client program calls a PL/SQL-procedure to create a new entry in the BFILE table
    - this procedure will run with augmented rights
    - procedure may need to create a new DB directory (depending on action, year and/or month)
    - procedure must copy the file from transfer directory into correct directory (UTL_FILE?)
    - procedure must create new row in BFILE table
    Is this a practicable way? Is there anything that I could do better?
    Thanks in advance for any hints,
    Stefan
    Edited by: Stefan Misch on 06.05.2012 18:42

    Stefan Misch wrote:
    yes, from a DBA point of view...
    Not really just from a DBA point of view. If you're a developer and you choose BFILE, and you don't have those BFILEs on the file system being backed up and they subsequently go "missing", I would say you (the developer) are at fault for not understanding the infrastructure you are working within.
    Stefan Misch wrote:
    But what about the possibility for the users to browse their files? This would mean I'd have to duplicate the files: one copy that goes into the DB, is stored as a BLOB and can be used to search, and another copy stored on the file system just to let the users browse their files (i.e. what files were created for action "offers" in February 2012; the filenames contain customer id and name as well as user id). In most cases there will be fewer than 100 files in any of those directories.
    This is why I thought a BFILE might be the best alternative, as I get both: fast index search and browsing capability for users who are used to using Windows Explorer...
    Sounds like it would be simple enough to add some metadata about the files in a table. So a bunch of columns providing things like "action", "date", "customer id", etc., along with the document stored in a BLOB column.
    As for the users browsing the files, you'd need to build an application to interface with the database... but I don't see how you're going to get away from building an application to interface with the database for this in any event.
    I personally wouldn't be a fan of providing users any sort of access to a production server's file system, but that could just be me.
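    To make the metadata-plus-BLOB suggestion concrete, here is a minimal JDBC sketch of the upload step. The documents table and its columns are hypothetical, not taken from Stefan's system:

        import java.io.InputStream;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.sql.Connection;
        import java.sql.PreparedStatement;

        public class DocumentUpload {
            // Insert one document plus its searchable metadata in a single row.
            public static void upload(Connection conn, Path file, String action,
                                      String customerId) throws Exception {
                String sql = "INSERT INTO documents (action, customer_id, created, "
                           + "filename, content) VALUES (?, ?, SYSDATE, ?, ?)";
                try (InputStream in = Files.newInputStream(file);
                     PreparedStatement ps = conn.prepareStatement(sql)) {
                    ps.setString(1, action);
                    ps.setString(2, customerId);
                    ps.setString(3, file.getFileName().toString());
                    // Stream the file into the BLOB column instead of loading it fully.
                    ps.setBinaryStream(4, in, Files.size(file));
                    ps.executeUpdate();
                }
            }
        }

    With the metadata in ordinary columns, the browse-by-action/year/month view becomes a simple query, and the documents travel with every database backup, which avoids the missing-BFILE problem described above.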

  • Advice for Soon-to-be MacPro Owner. Need Recs for Best Practices...

    I'll be getting a Quad Core 3 GHz with 1GB of RAM, a 250GB HD, and the ATI X1900 card. It will be my first Mac in five years (replacing a well-used G4 TiBook 1GHz).
    First the pressing questions: thanks to the advice of many on this board, I'll be buying 4GB of RAM from Crucial (and upgrading the HD down the road when needs warrant).
    1) Am I able to add the new RAM alongside the 1GB that the system comes with? Or will they be incompatible, requiring me to remove the shipped RAM?
    Another HUGE issue I've been struggling with is whether or not to migrate everything on my TiBook to the MacPro in one batch. I have so many legacy apps and fonts that I probably don't use any more and that have probably contributed to intermittent crashes and performance issues. I'm leaning towards fresh installs of my most crucial apps - Photoshop with plugins, Lightroom, Firefox with extensions - and just slowly and systematically re-installing software as the need arises.
    Apart from that...I'd like to get a consensus as to new system best practices. What should I be doing/buying to ensure and establish a clean, maintenance-lite, high-performance running machine?

    I believe you will end up with 2 x 512MB RAM from the Apple store. If you want to add 4GB more, you'll want to get 4 x 1GB RAM sticks. 5GB is never an "optimal" amount, and people talk like that's bad or something, but it's simply that the last gig of RAM isn't accessed quite as fast. You'll want to change the placement so the 4 x 1GB sticks come "first" and are all paired up nicely, so your two 512MB sticks only get accessed when needed. A little searching here will turn up explanations of how best to populate the RAM for your situation. It's still better to have 5 gigs where the 5th gig isn't quite as fast than to have 4. They will not be incompatible, but you WILL want to remove the original RAM, put the 4GB sticks into the optimal slots, then add the other two 512MB chips.
    Do fresh installs. Absolutely. Then only add those fonts that you really need. If you use a ton of fonts I'd get some font checking app that will verify them.
    I don't use RAID for my home machine. I use 4 internal 500GB drives. One is my boot drive, the other is my data drive (although it is now full and I'll be adding a pair of external FW drives). Each HD has a mirror backup drive. I use SuperDuper to create a clone of my boot drive, but only after a week or two of rock-solid performance following any system update. Then I don't touch it until another update or installation of an app, followed by a few weeks of solid performance with all of my critical apps. That allows me to update QuickTime or apply a security update without concern... because some of those updates really cause havoc for people. If I have a problem (and it has happened) I just boot from the other drive and clone that known-good drive back. I also back up my data drive "manually" with SuperDuper.
    You will get higher performance with RAID, of course, but doing that requires three drives (two for performance and one for backup) just for data/scratch, as well as two more for boot and backup of boot. Some folks can fit all their boot and data on one drive, but Photoshop and many other apps (FCP) really prefer data to be on a separate disk. My setup isn't the absolute fastest, but for me it's a very solid, low-maintenance, good-performing setup.

  • Request for best connectivity Setup for WRT 1900ac. DHCP issue

    Hello Customer Care Team.
    I imported the WRT1900AC router from the USA to my location in India. Apparently your customer care in India doesn't give any support for this fantastic router.
    I am now at your mercy for assistance.
    Please help & guide me in getting the best out of it and with a good setup.
    My ADSL2+ modem router: Netgear
    Internet connectivity as follows:
    LAN IP: 192.168.1.1, subnet: 255.255.255.0
    DHCP server on: start 192.168.1.2, end 192.168.1.254
    WAN IP setup:
    PPPoE, LLC/SNAP, NAT on
    WAN IP auto-assigned
    MTU: 1492
    My Linksys WRT1900AC wireless is set up as follows for 2.4GHz and 5GHz:
                      2.4GHz    5GHz
    Network mode      mixed     mixed
    Security mode     WPA2      WPA2
    Channel width     auto      auto
    Channel           auto      auto
    Besides the above, the WRT1900AC has:
    1. NAT enabled
    2. Local network DHCP enabled as:
       DHCP server enabled
       Start IP: 10.185.21.100, end: 10.185.21.199
       (IP range: 10.185.21.x)
    3. Internet setting on the WRT1900AC:
       IPv4 Auto DHCP Configuration, MTU Auto
    I want you to tell me if my Linksys WRT1900AC is configured correctly and whether the DHCP server IPs are correct.
    When I enter 192.168.1.1 in the browser it takes me to my Netgear modem's web setup page. So how do I get the modem and the Wi-Fi router onto the same 192.168.1.x range?
    Please help & provide your best solution for fast speed.
    Thanks

    Sorry you didn't get an answer faster, but you have to know this is a peer-to-peer forum; it is users helping other users, so it's not always the quickest way. First off, can you change your ISP router to modem-only mode? It is best that way, as you will get conflicts such as what you are seeing when there are two routers on the same link. You are reaching the ISP router when you enter 192.168.1.1 because that is its address on the LAN side. It appears that your router is getting set to a 10.x IP because of the setup. You also have two routers trying to run DHCP, so that is an issue as well. If you can change the ISP router to modem-only mode, then once it is up, disconnect it from the router, do a factory reset on your router, and make sure it has the IP 192.168.1.1; you will then be able to make any changes you need to. The default login is a password of "admin" with the user left blank.

  • Need help for Disaster Recovery setup and configuration

    Hi Gurus.
    We are planning a disaster recovery test. Normally we set up our network at another site, restore the data there, and then do the configuration and setup to run the SAP system on the new network.
    But this is a little time consuming. We are planning to do something which might save us time.
    We want to create one application instance alongside the central instance on the current production server. This instance will be "deactivated" and will not run on the production server.
    When we do the disaster recovery, we will restore the CI filesystem to the disaster recovery network. That way this new application instance will also get copied. Once this is done, we just need to go to this instance and start the SAP system. This way we save the time of configuring the new instance and then starting it.
    e.g.
    Production host -> prdhost1
    Central Instance -> /usr/sap/P01/DVEBMGS00
    and its subdirectories: data, work, log, sec
    create new application instance -> /usr/sap/P01/D03
    and its subdirectories: data, work, log, sec
    create new instance profiles -> P01_D03_prdhost1, START_D03_prdhost1
    This configuration will be created but the SAP instance will not be started. It will be copied to the new disaster recovery site, and in that new network we can start this new application instance.
    Please let me know if we can do that. If yes, are we going to face any issues in production?
    Other than the above, which file systems do we need to take care of when creating the new instance?
    Any help will be highly appreciated.
    Thanks in advance
    Best Regards,
    Basis CK

    Hi!
    Sorry, but your scenario is a little bit confusing.
    Let me see if I understood: you want to configure a "virtual" instance to be able to use it in case of DR.
    I don't advise you to do things that way. Why not use virtual hosts and virtual instances, as we do when implementing instances within a cluster or metrocluster?
    You must have the license for the DR server installed. If you use virtual hosts, all DR issues like printers, logon groups, RFC groups, etc. will be solved.
    If you use virtual hosts and virtual instances, all you need to do is replicate the environment to the DR site, restore the system and start production.
    Of course... you are using UNIX, and within the UNIX world all these things are much easier.
    Bear in mind that if you have Java instances then I think this is the only recommended procedure.
    Cheers,
    FF

  • Need ideas for plug-in re-design

    My DAQ app uses "Plugins".   A Plugin is a VI which conforms to a particular terminal configuration.
    Here is the complete diagram of a simple one (image from the original post not reproduced here):
    The idea is that a plugin is another channel: This one takes the value of TORQUE (in N-m) and the value of SPEED (in RPM), performs some computation, and produces a result POWER (in kW).
    The user can then think of this as just another channel - it shows up in the datastream and the user can plot it, record it, set alarms on it, convert it to an output and send it out the CAN bus, anything he can do with a real channel, he can do with this plugin channel.
    So the above is exactly equivalent to having a POWER transducer.
    They can get much more complex than this.
    My code organizes these so that the prerequisite channels are run first, and then the dependent ones. (a prerequisite channel could also be a plug-in).
    If plugin B depends on plugin A as a prerequisite, I make sure that A runs first, regardless of which comes first in the CONFIG file.
    So the actual DAQ code samples the hardware channels, and then the plugins in a particular order.
    The DAQ is sampling at 10 Hz, and so these plugins run at that rate too.  There may be 100 different plugins, maybe 50 at a time in one test.
    The DAQ runs on a PXI box, running RTOS.  The host is an ordinary PC, running Windows.
    Even though the PXI is running a compiled RTEXE, it still loads the source-code VIs and runs them.
    My data file includes all the recorded data, all the config stuff used to set that up, AND THE PLUGIN VIs THEMSELVES.
    The plugin VI files are read as strings and stored in the datafile as strings, along with their names.
    When reading a data file, I read this string, and store it in a "sandbox" folder with a .VI extension.
    This guarantees that the version I have is the version the data was originally recorded with.
    The user can EDIT the data files, for example changing the scale factor on the SPEED channel from 307.3 RPM/V to 308.5 RPM/V.
    That causes a re-calculation of the SPEED channel.
    It ALSO causes a re-calculation of the POWER channel: My code detects that POWER depends on SPEED and since SPEED changed, I have to run every sample back through the POWER VI to get a new value. (Remember they're not all as simple as this example).
    THAT is why the VIs are stored in the data file.
    ALL OF THAT WORKS. 
    100%
    Like a dream.
    For 8-9 years now.
    EXCEPT...
    When it comes time to change LABVIEW versions.
    My client has 10,000+ data files, each with 5-50 VIs embedded. Files are spread among 30+ machines, with a central server backup.
    We need to hang on to this data.
    I have a File Updater, which will take datafiles and update them (they are a DataLog file).
    THE TROUBLE IS:
    If I take a recent datafile, recorded with LV2010 (where we've been for 3+ years) and open it with the EXE built with LV2013, there's a problem if it tries to reprocess it.  The  error comes out as 
    Error Code 1126 occurred at Open VI Reference in HDT Open PlugIn.vi->HDT Find Calc Prerequisites.vi->HDT DataFile Prerequisite Manager.vi->HDT DataFile Viewer.vi->HDT.vi<APPEND>
    An error occurred loading VI 'BSFC.vi'.
    LabVIEW load error code 10: VI version (10.0) is too old to convert to the current LabVIEW version (13.0).
    The EXE code, having no COMPILER handy, cannot use the old VI.
    My FILE UPDATER has a recompiler built into it (it reads the VI string, saves it as a VI file, opens it as a VI, saves it as a VI, then reads it as a string and puts it back where it belongs).
    But if THAT is built into an EXE, it doesn't work either. Same problem, same error.
    If I run the FILE UPDATER from the LV2013 DevSys, then it's OK.  The re-compile process works fine, and then the EXE can read it and use the VI.
    BUT:
    1... That's painfully slow.  I have a cache handler where I recompile each VI only once and recognize it if I see it again, but still it's a long process.
    2... That doesn't work from an EXE.  My client has 2-3 DevSystems and 30+ computers with just the EXE. But that's where the data files are.
    3... Backing up data files on CD or DVD doesn't work.
    SO............  TO make a long story even longer.......
    I'm looking for an alternative.
    REQUIREMENTS:
    1... String-based, I suppose.  I want to type in "Power = Torque * Speed / 9545" and have the code know to fetch the value of  "Power", the value of "Torque", do the math, and produce the result.
    2... Has to execute quickly.  I have a zillion things to do at 10 Hz.
    3... Has to be able to re-order itself.  If I have ANOTHER plug-in channel called Fuel per Watt, and it calculates "FuelPerWatt = FuelFlow / Power * 10.34", then it has to know to execute POWER before it executes FUELPERWATT (essentially a topological sort - see the sketch after this post).
    4... Must work from an EXE and an RTEXE.
    I've looked at a thing called MathToG, and it works OK for creating a VI from an equation, but it needs VI scripting and therefore won't work from an EXE.
    Any ideas?
    Steve Bird
    Culverson Software - Elegant software that is a pleasure to use.
    Culverson.com
    Blog for (mostly LabVIEW) programmers: Tips And Tricks
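    An aside for anyone facing the same problem: requirement 3 amounts to a topological sort over the formula dependencies. Here is a minimal sketch in Java (channel names taken from the examples above; everything else is hypothetical, and a LabVIEW implementation would of course look different):

        import java.util.ArrayList;
        import java.util.HashMap;
        import java.util.HashSet;
        import java.util.List;
        import java.util.Map;
        import java.util.Set;

        public class PluginOrdering {

            // Order formula channels so each runs after the channels
            // its expression refers to (requirement 3 above).
            static List<String> order(Map<String, Set<String>> deps) {
                List<String> sorted = new ArrayList<>();
                Set<String> done = new HashSet<>();
                Set<String> visiting = new HashSet<>();
                for (String ch : deps.keySet()) {
                    visit(ch, deps, done, visiting, sorted);
                }
                return sorted;
            }

            static void visit(String ch, Map<String, Set<String>> deps,
                              Set<String> done, Set<String> visiting, List<String> out) {
                if (done.contains(ch)) return;
                if (!visiting.add(ch)) {
                    // A formula that ultimately depends on itself is an error.
                    throw new IllegalStateException("Circular dependency at " + ch);
                }
                for (String pre : deps.getOrDefault(ch, Set.of())) {
                    visit(pre, deps, done, visiting, out);
                }
                visiting.remove(ch);
                done.add(ch);
                out.add(ch);
            }

            public static void main(String[] args) {
                Map<String, Set<String>> deps = new HashMap<>();
                deps.put("Power", Set.of("Torque", "Speed"));
                deps.put("FuelPerWatt", Set.of("FuelFlow", "Power"));
                // Prints prerequisites before dependents (exact order may vary),
                // e.g. [Torque, Speed, Power, FuelFlow, FuelPerWatt]
                System.out.println(order(deps));
            }
        }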

    Starting at the top, create a class which has no data and two methods - GetPrerequisites and Execute.  Both are dynamic dispatch (use the right-click option in the New menu in the project to create them).
    GetPrerequisites - has the input and output of the parent class so it is dynamic dispatch, and has a single output which is an array of the same class.  You may want error I/O as well.
    Execute - has the input and output of the parent class so it is dynamic dispatch, and has Result and Units outputs.  The prerequisites output is redundant with the previous function, but can be added.  Again, you may want error I/O.
    In the parent class, both of these functions are essentially empty and return default data.  The parent class is used for a template and to provide a parent for the function classes.  As such, set the properties for each of the methods so that children must override them (do this in the class properties dialog box).  Set these up the way you want them up front, since the children must use the exact same connector pane.
    The functions themselves are all created as follows:
    Create a class
    Change its inheritance so it inherits from the parent class - it is now a child
    Use the new menu on the child class to create functions to override the originals
    Delete all code but the terminals in the child functions
    Change the GetPrerequisites code so it returns a constant array of the prerequisite objects.  These will be parallel children of the original parent class.  For your power class, this would be an array containing the torque and speed classes.  The easiest way to create this array is to drag a copy of the torque and speed classes onto the block diagram and use build array.  They will be "coerced" to the parent class, but LabVIEW still knows what they are.
    Change the Execute code so that it does whatever it needs to do.  In general, use the GetPrerequisites function to fetch the prerequisites.  Use a FOR loop to run the parent Execute function on each of these prerequisites.  Dynamic dispatching will execute the correct code.  You now have an array of the results.  Use this, with whatever other functionality you need, to calculate your final values.
    The prerequisites can be dynamically loaded so you do not have a static link to the prerequisite classes.  If you would like to have a text interface, you can dynamically load the individual functions, then use their values in a LabVIEW formula VI for the final calculation.
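    For comparison, here is roughly the same parent/child dynamic-dispatch shape sketched as text, in Java. This is purely illustrative (the names mirror the LabVIEW classes described above), not a substitute for the LabVIEW implementation:

        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        abstract class Channel {
            abstract String name();
            // Which channels must be computed before this one.
            abstract List<Channel> getPrerequisites();
            // Compute this channel's value from the sample map.
            abstract double execute(Map<String, Double> values);
        }

        class Torque extends Channel {
            String name() { return "Torque"; }
            List<Channel> getPrerequisites() { return List.of(); }
            // A "real" channel: its value comes straight from the DAQ sample.
            double execute(Map<String, Double> v) { return v.get("Torque"); }
        }

        class Speed extends Channel {
            String name() { return "Speed"; }
            List<Channel> getPrerequisites() { return List.of(); }
            double execute(Map<String, Double> v) { return v.get("Speed"); }
        }

        class Power extends Channel {
            String name() { return "Power"; }
            List<Channel> getPrerequisites() { return List.of(new Torque(), new Speed()); }
            double execute(Map<String, Double> v) {
                // Dynamic dispatch: each prerequisite runs its own execute().
                for (Channel c : getPrerequisites()) {
                    v.put(c.name(), c.execute(v));
                }
                return v.get("Torque") * v.get("Speed") / 9545.0; // N-m * RPM -> kW
            }
        }

        public class DispatchSketch {
            public static void main(String[] args) {
                Map<String, Double> sample = new HashMap<>();
                sample.put("Torque", 50.0);   // N-m
                sample.put("Speed", 3000.0);  // RPM
                System.out.println(new Power().execute(sample)); // ~15.7 kW
            }
        }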
    If this is still Greek, you may want to walk through the examples of LabVIEW classes and dynamic dispatch to get more background on how to use classes.
    If you would prefer a text string for your units so that you have more flexibility, let me know.  I have an SI unit string validation VI that may be of use.  It works for most unit combinations, provided they are not nested too deep.
    This account is no longer active. Contact ShadesOfGray for current posts and information.

  • Need ideas for a validation concept

    Hello,
    I am a bit helpless about validations on my page!
    I have a page with a lot of items, which are partially filled via AJAX.
    When a button is pressed, these items should be validated before a page process runs. And of course, if a validation fails, the process should not run.
    The problem is that the error messages of the validations should be displayed in a JavaScript box and not in the notification area.
    Further, I was not able to set it up so that the button calls a JavaScript function and the process only runs if the button was pressed and the JavaScript validation function returns true!
    And one more problem is that the items are filled by AJAX, and because of that the values of these items are sometimes different when I try to reference them in the page process to validate the items.
    So I have no idea how to create a good validation concept, which shows a JavaScript box when a validation fails and where the page process only runs if all validations return true.
    I hope you have an idea, how I can do that!
    Thank you,
    Tim

    Hello Tim,
    Validation, especially in a web application, should be a two-phase process. The first phase should be on the client side, where you can use JavaScript, especially for the non-DB-related validations like not null, valid date, numbers only, etc. The second phase should take place on the server side, after the page submits, mainly because on the "way" from the client to the server things can be changed and tampered with.
    The fact that you are using AJAX doesn't change this practice. Even using a select list (radio groups and checkboxes are in the same category) based on a dynamic LOV does not assure you that the selected values received by the server are legal ones.
    You can use Andy's example for client-side validation, but you'll also need server-side validation. You should check Patrick's blog on that one.
    "The problem is that the error messages of the validations should be displayed in a javascript box and not in the notification area."
    Why? Server-side validation results (at least the built-in ones) can't be displayed in JavaScript boxes, as this is a server-side process. The APEX practice is that a failed validation stops the page processes, so nothing can be changed as long as you have items which didn't validate correctly. I believe Patrick's solution presents the validation errors in a very friendly manner, although not in a JavaScript box.
    "And one more problem is that the items are filled by AJAX and because of that the value of these items are sometimes different when I try to reference them in the page process to validate the items."
    You should give us some more specific details on the way you are populating the items, and the page process (and fire point) you are using. In any case, it is possible that your page process relies on session state, and just using AJAX doesn't necessarily update session state. In these cases, maybe you should initiate session state updates prior to submitting the page.
    Regards,
    Arie.
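    A footnote to Arie's point: APEX's server-side validations are written in PL/SQL, but the two-phase principle is framework-independent. As a minimal illustration in Java (all names hypothetical), the server re-checks everything regardless of what client-side JavaScript already verified:

        import java.util.ArrayList;
        import java.util.List;

        public class ServerSideValidation {

            // Server-side re-validation: never trust values arriving from
            // the client, even if JavaScript already checked them.
            static List<String> validate(String quantity, String dueDate) {
                List<String> errors = new ArrayList<>();
                if (quantity == null || quantity.trim().isEmpty()) {
                    errors.add("Quantity is required.");
                } else {
                    try {
                        if (Integer.parseInt(quantity.trim()) <= 0) {
                            errors.add("Quantity must be positive.");
                        }
                    } catch (NumberFormatException e) {
                        errors.add("Quantity must be a whole number.");
                    }
                }
                if (dueDate == null || dueDate.trim().isEmpty()) {
                    errors.add("Due date is required.");
                }
                return errors; // run the page process only if this list is empty
            }

            public static void main(String[] args) {
                System.out.println(validate("-3", "")); // two errors reported
            }
        }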

  • Need advice for best practice when using TopLink with external transaction

    Hello;
    Our project is trying to switch from TopLink-controlled transactions to using external transactions, so we can make database operations and JMS operations within a single transaction.
    Some of our team tried out the TopLink support for external transactions and came up with the following initial recommendations.
    Since we are not familiar with using external transactions, I would like members of this forum, and experts, to help comment on whether these recommendations are indeed valid or in line with best practice. And for folks who have done this in their projects, what did you do?
    Any help will be most appreciated.
    Data Access Objects must be enhanced to support reading from a TOPLink unit of work when using an external transaction controller. Developers must consider what impact a global transaction will have on the methods in their data access objects (DAOs).
    The following findSomeObject method is representative of a “finder” in the current implementation of our DAOs. It is not especially designed to execute in the context of a global transaction, nor read from a unit of work.
    public SomeObject findSomeObject(ILoginUser aUser, Expression queryExpression) {
        ClientSession clientSession = getClientSession(aUser);
        SomeObject obj = null;
        try {
            ReadObjectQuery readObjectQuery = new ReadObjectQuery(SomeObject.class);
            readObjectQuery.setSelectionCriteria(queryExpression);
            obj = (SomeObject) clientSession.executeQuery(readObjectQuery);
        } catch (DatabaseException dbe) {
            // throw an appropriate exception
        } finally {
            clientSession.release();
        }
        if (obj == null) {
            // throw an appropriate exception
        }
        return obj;
    }
    However, after making the following changes, the findSomeObject method will now read from a unit of work while executing in the context of a global transaction.
    public SomeObject findSomeObject(ILoginUser aUser, Expression queryExpression) {
        Session session = getClientSession(aUser);
        SomeObject obj = null;
        try {
            ReadObjectQuery readObjectQuery = new ReadObjectQuery(SomeObject.class);
            readObjectQuery.setSelectionCriteria(queryExpression);
            if (TransactionController.getInstance().useExternalTransactionControl()) {
                // Read through the active unit of work so that uncommitted
                // changes in the global transaction are visible to the query.
                session = session.getActiveUnitOfWork();
                readObjectQuery.conformResultsInUnitOfWork();
            }
            obj = (SomeObject) session.executeQuery(readObjectQuery);
        } catch (DatabaseException dbe) {
            // throw an appropriate exception
        } finally {
            // The active unit of work is managed by the external transaction,
            // so only release the session when TopLink controls the transaction.
            if (TransactionController.getInstance().notUseExternalTransactionControl()) {
                session.release();
            }
        }
        if (obj == null) {
            // throw an appropriate exception
        }
        return obj;
    }
    When getting the TOPLink client session and reading from the unit of work in the context of a global transaction, new objects need to be cached.
    public UnitOfWork getUnitOfWork(ILoginUser aUser) throws DataAccessException {
        ClientSession clientSession = getClientSession(aUser);
        UnitOfWork uow = null;
        if (TransactionController.getInstance().useExternalTransactionControl()) {
            uow = clientSession.getActiveUnitOfWork();
            // Cache new objects so they are visible to queries before
            // the global transaction commits.
            uow.setShouldNewObjectsBeCached(true);
        } else {
            uow = clientSession.acquireUnitOfWork();
        }
        return uow;
    }

    As it generally is with this sort of question, there is no exact answer.
    The only required update when working with an external transaction is that getActiveUnitOfWork() is called instead of acquireUnitOfWork(); other than that, the semantics of the calls, and when you use a UnitOfWork, are still dependent on the requirements of your application. For instance, I noticed that originally the findSomeObject method did not perform a transactional read (no UnitOfWork). Have the requirements for this method changed? If they have not, then there is still no need to perform a transactional read, and the method would not need to change.
    As for the requirement that new objects be cached: this is only required if you are not conforming the transactional queries, and it adds a slight performance boost for find-by-primary-key queries. In order to use this, however, objects must be assigned primary keys by the application before they are registered in the UnitOfWork.
    --Gordon

  • Need fix for SoundMAX audio card for my HP TouchSmart 300 series running Windows 7 64-bit OS

    I need Stereo Mix in my SoundMAX controls in order to DJ with my new PC... so far I can only find a fix for Vista and XP 64-bit, which after download has a stereo mic option... can you help?
    annie

    Try an external USB sound card. M-Audio or Creative are a couple off the top of my head you could use. They usually come with a full driver and product suite that can achieve much better sound than the SoundMAX in the TouchSmart has to offer.
    HP ENVY 17-j005tx Notebook, HP ENVY Recline 27-k001a, HP ProLiant MicroServer Gen8 G2020T, HP MediaSmart EX495 Server, HP MediaVault 2020, HP ENVY 120 AiO Printer

  • Need Ideas for creating and using Custom Business Object

    Hello Guys,
    I am developing an application which uses a Request->Approve->Create approach for creating Purchase documents.
    Now I am a little puzzled about how to make use of the Business Object BUS2014.
    The application I am developing has its own unique 'Request Number'  (say REQID)  which will point to the Request for Creation of a purchase order.
    Whenever a Request is created (from a Z-Tcode) a workflow needs to be initiated and it has to be sent to the approver.
    The Purchase Document will be created once the approver approves.
    Now my confusion here is: if I use BUS2014, the object will be instantiated only during the final step of the workflow. But I need an instance at the beginning of the requestor->approver negotiations, as I am playing with events, and these events need an object key.
    How should I proceed here?
    Should I create a new logical Business Object like ZPOREQ where I have the above mentioned REQID as the key?
    And should I have an attribute of type BUS2014 inside the custom BO?
    How will I make use of the methods like BUS2014.Create etc which I may need to create the purchase document?
    Any small direction will be a huge help for me to get used to this wilderness.

    Hi,
    You should continue with the ABAP class idea. The business objects are kind of "obsolete" already, and if there is a need to create a new "object", ABAP classes are the way to go. Business objects are still useful, but I normally use them only when an existing standard business object fulfills the requirements (possibly with slight additions) which is almost never. 
    From my point of view you can use the existing class. Depending on the circumstances, I normally have just one class that I use both for workflow and for the other functionality that is required, but you have to understand that I have this goal in mind already when starting the development process. As your class most probably has many useful features already (such as the header and item data as attributes, etc., if I understood correctly), these are also useful in workflow (class attributes will be available in the WF container, etc.).
    If you are hesitant to use the same class directly in your workflow, you could also create a new class ZCL_REQUEST_FOR_WF (with the workflow interface), and then simply add your existing class ZCL_WF_REQUEST as an attribute of this new class. Then this new workflow class could contain the pure workflow stuff, and your existing class the non-workflow stuff. But this most probably will not make much sense - just implement the if_workflow interface in your existing class (this is just one possibility that you might consider).
    Regards,
    Karri

  • Need ideas for new PJC/Java Beans

    Hello everybody,
    I am always looking for new ideas about the Java Bean and PJC stuff. Sometimes I have good ideas about this, but sometimes it is quite a pain to find something new and innovative.
    If you have ideas or needs about this, let me know in this thread.
    (Sorry Oracle staff if I am going beyond my rights by using the Forum for that purpose)
    Francois

    Francois,
    I think that we all need to thank you for all your hard work promoting Oracle Forms.
    Your input and constant support on this forum is greatly appreciated.
    But I personally think that all this needs to be addressed by Oracle.
    You wrote:
    Sometimes I have good ideas about this, but sometimes it is quite a pain to find something new and innovative.
    And I agree with you.
    You can go as far as Forms lets you go...
    Thanks,
    Michael

  • Need Ideas For Storage Units Data Storage Capabilities - Special Attributes? History? 2 Ind-UMs

    Hello experts,
    The company I'm working for is looking into permanently storing pallet-level information (keeping a history of the pallet movements) and using RF scanners to move product in the warehouse. They have made a decision to implement Warehouse Management's Storage Unit Management functionality. They looked into Handling Units as well, but they don't want to use HUs for many reasons.
    They would also like to store 2 units of measure at the Storage Unit level without activating "Catch Weight Management" because that would require them to run parallel systems for a while and also deal with CWM functionality restrictions. Keep in mind that most of the products they handle are catch-weight products, so UM conversions don't work for them.
    Solutions I have in mind:
    For storing 2 independent UMs: store the quantity of the 2nd unit of measure in a custom table at the Storage Unit level, and use custom RF-scanner programs to "receive, issue, scrap, and move" goods in the warehouse. These custom RF-scanner programs would update the SAP standard tables with the 1st (base) UM and also the custom table with the 2nd UM.
    For permanently storing SU history: use a custom table to store Storage Unit history. The custom programs created to handle the SUs would update this custom table.
    Last time I checked, the history of a Storage Unit is not recorded in SAP, correct? ONLY in Handling Units, correct?
    OR are there any documents/tables that permanently store the Storage Unit number so they can be queried after the Storage Unit is consumed/issued?
    I know SAP keeps improving its applications with every release, so I'm just looking at my options here.
    Does anyone else have any other ideas on how to approach this other than Using Handling Units & Catch Weight Management?
    Thanks in advance!
    -Mr. Bello
    Message was edited by: Jürgen L

    For storing 2 independent UMs:
    Can you please explore the LS26 option? There the system also allows you to change the "Unit of measure" parameter and displays the stock in a different UoM. Hopefully this will solve your problem, and you will not need to design a custom table to store the value at a different UoM level.
    For permanently storing SU history:
    The system stores the storage-unit-related values in table LEIN; however, the moment you do any consumption or movement to a non-SU location, the entries are removed from this table. Hence I think the approach you have taken seems to be the only approach for storing the historical data. (Make sure you take the archiving activity into consideration: as you proceed further, the size of the database will get bigger, and the time to generate any report will take longer than expected.)
