SB Live 5.1 or Value? SB0

Dear All, I bought a sound card: SB Live! 5.1. I have used this card for a long time and it worked 100%. After a fresh installation of Windows XP with SP2 I cannot install my sound card anymore. My problem is that I lost the driver CD, and none of the files I have downloaded from Creative.com are working. I have downloaded the drivers for the SB Live! 5.1, the 5.1 Gamer and Digital, and so on, as well as the older LiveDrvPack.exe, but none of them works!
During the setup I get the message: "Setup could not detect any Sound Blaster card on your system. Please ensure that your Sound Blaster hardware is properly installed before running this Setup program. Setup will now exit."
So my next step was to boot a Linux Knoppix 5.0 DVD. My sound card works properly there and I can hear sound. During boot Knoppix reported: Sound Card: SB Live! Value. I do not know what this means; maybe the Value and the 5.1 use the same driver?
My next step was to download the Windows XP driver for the SB Live! Value. I got the same message: "Setup could not detect any Sound Blaster..."
What can I do now?
Jack

Jack,
The SB0680 isn't exactly a Live! series card, so the Live! drivers listed on the web won't work. The actual name of the card is Sound Blaster 5.1. I would suggest contacting Customer Support to see if they can assist you with a copy of the CD.
Jason

Similar Messages

  • Soundblaster Live! 5.1 Digital SB0

    Hi people !
    My soundcard is an SB0220 and, from what I see on the internet, people don't seem to like it. I have sound problems in two games, Rayman 3 and Toca Race 2, but all other games work fine. I would like to know how to find out whether new drivers or updates are released for this card. I bought it in a computer store, and as you can see, a lot of computer retailers sell this card, the same as the Audigy and others.
    Thanks!

    Check the model number on the sticker on the Live! card's board so you get the card properly identified (SB022x). BadBoy has some Live! 5.1 driver CDs downloadable at http://badboy.filefront.com/
    jutapa

  • HDS live streaming to Flash not working

    Adobe Flash Media Server 4.5.5 r4013
    Windows 2008
    Sources:
    http://help.adobe.com/en_US/flashmediaserver/devguide/WSd391de4d9c7bd609-52e437a812a3725df a0-8000.html
    http://www.adobe.com/devnet/adobe-media-server/articles/live-multi-bitrate-video-http-flas h-ios.html
    Live streaming a single or multi-bitrate video over HTTP to Flash does not work. I have followed the instructions on the 2 sources listed above repeatedly, but I can’t get live streaming over HTTP to Flash to work. Live streaming to iOS over HTTP works with no problems (single and multi-bitrate streams).
    I have tried the troubleshooting steps from the following:
    http://help.adobe.com/en_US/flashmediaserver/devguide/WS0432746db30523c21e63e3d12efac195bd -8000.html
    Troubleshoot live streaming (HTTP)
    1.      Services window (Windows): Flash Media Server (FMS), Flash Media Administration Server, and FMSHttpd services are running. ✓
    2.      Verified that the request URL is correct. ✓
    3.      Configured ports (see the config sketch after this list):
    a.      Configure Apache to use port 80. Open rootinstall/Apache2.2/conf/httpd.conf in a text editor. Change the line Listen 8134 to Listen 80.
    b.     Configure Flash Media Server not to use port 80. Open rootinstall/conf/fms.ini in a text editor. Remove 80 from the ADAPTOR.HOSTPORT parameter so the parameter looks like the following: ADAPTOR.HOSTPORT = :1935 ✓
    4.      Placed a crossdomain.xml file to the rootinstall/webroot directory. ✓
    5.      In Flash Media Live Encoder, select the Encoding Options tab, choose Output from the Panel options menu, and verify the following:
    a) The value of FMS URL is rtmp://fms-dns-or-ip/livepkgr. If you’re testing on the same server as Flash Media Server, you can use the value localhost for fms-dns-or-ip. ✓
    b) For a single stream, the value of Stream is livestream?adbe-live-event=liveevent. ✓
    c) For adaptive bitrate streaming, the value of Stream is livestream%i?adbe-live-event=liveevent. ✓
    Flash Media Live Encoder uses this value to create unique stream names. To use another encoder, provide your own unique stream names, for example, livestream1?adbe-live-event=liveevent, livestream2?adbe-live-event=liveevent.
    The encoder is showing all 3 streams being published and streaming.
    6. Check Administration Console: the livepkgr application and the 3 streams are running. ✓
    7. Check the logs for errors. Flash Media Server logs are located in the rootinstall/logs folder. The master.xx.log file and the core.xx.log file show startup failures. Apache logs are located in the rootinstall/Apache2.2/logs folder. X
    a)   core00.log: these errors did not occur every time that I tried playing the live stream but these are the only relevant errors in the logs.
    1. 7968 (w)2611179     Warning from libf4f.dll: [Utils] [livestream2] Discarded all queued Media Messages received before first Video Keyframe Message
    2. 7968 (w)2611179     Warning from libf4f.dll: [Utils] [livestream3] Discarded all queued Media Messages received before first Video Keyframe Message
    b) edge00.log:
    13:33:57 4492          (w)2641213 Connection rejected by server. Reason : [ Server.Reject ] : (_defaultRoot_, _defaultVHost_) : Application (hds-live) is not defined.          -
    c) Apache-Error:
    1.     [warn]  Checking if stream is disabled but bootstrap path in event file is empty for event:livepkgr/events/_definst_/liveevent stream name:livestream
    2.     [warn] bootstrap path is in event file is empty for event:livepkgr/events/_definst_/liveevent stream name:livestream1
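    For reference, the two port changes from step 3 look like this in the config files themselves (a sketch assuming the default rootinstall layout):
    # rootinstall/Apache2.2/conf/httpd.conf -- Apache answers the HTTP requests on port 80
    Listen 80
    # rootinstall/conf/fms.ini -- Flash Media Server stays off port 80; RTMP remains on 1935
    ADAPTOR.HOSTPORT = :1935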
    As I mentioned, everything works on iOS and FMS seems to be creating all of the stream segments and meta files:
    a.     The 3 streams are being created in: HD:\Program Files\Adobe\Flash Media Server 4.5\applications\livepkgr\streams\_definst_
    b.    FMS is creating the following files in each stream folder (livestream1, livestream2, livestream 3):
    1. livestream1.bootstrap
    2. livestream1.control
    3. livestream1.meta
    4. .f4f segments
    5. .f4x segments
    The appropriate files are also being created in the HD:\Program Files\Adobe\Flash Media Server 4.5\applications\livepkgr\events\_definst_\liveevent folder, in which I have the following Manifest.xml and Event.xml files:
    <manifest xmlns="http://ns.adobe.com/f4m/1.0">
      <media streamId="livestream1" bitrate="200" />
      <media streamId="livestream2" bitrate="500" />
      <media streamId="livestream3" bitrate="1000" />
    </manifest>
    <Event>
      <EventID>liveevent</EventID>
      <Recording>
        <FragmentDuration>4000</FragmentDuration>
        <SegmentDuration>16000</SegmentDuration>
        <DiskManagementDuration>3</DiskManagementDuration>
      </Recording>
    </Event>
    I’ve tried clearing the contents of both streams\_definst_ and events\_definst_\liveevent (keeping the xml files) after restarting the encoder, and creating a different event definst for the streams (liveevent2 for example).
    We have an event in 2 weeks that we would like to stream to both Flash and iOS. Any help in solving this problem will be greatly appreciated.

    One step closer:
    Changed the crossdomain.xml file (more permissive settings).
    Changed the encoding in FMLE to VP6. It is working somewhat now (I don't know what I did to make it start streaming through HDS).
    But at least now I can get the individual streams in the set manifest file to work:
    http://localhost/hds-live/livepkgr/_definst_/livevent/livestream1.f4m
    http://localhost/hds-live/livepkgr/_definst_/livevent/livestream2.f4m
    http://localhost/hds-live/livepkgr/_definst_/livevent/livestream3.f4m
    BUT when I try to play the streams through the set manifest file from http://localhost/liveevent.f4m I'm getting the following error:
    "The F4m document contains errors URL missing from Media tag." I'll search the forums to see if anyone else has come across this problem.
    I used the f4m config tool to make the file. These are the file's contents:
    <manifest xmlns="http://ns.adobe.com/f4m/2.0">
      <baseURL>http://localhost/hds-live/livepkgr/_definst_/liveevent/</baseURL>
      <media href="livestream1.f4m " bitrate="200"/>
      <media href="livestream2.f4m " bitrate="500"/>
      <media href="livestream3.f4m " bitrate="1000"/>
    </manifest>
    Thanks

  • Validating Tabular Form Column Against Value From Another Table

    Hi,
    I am brand new to this forum, so please bear with me a little!  I only have a small amount of experience writing PL/SQL, and I've never written Javascript or JQuery before.  I am an Oracle DBA, and I have coding experience in C and PERL, so I do have a solid technical background.  But, I need some advice on the best way to approach a problem.
    I have a database application in Oracle Apex (version 4.2) with a tabular form against a table: let's say Table #1 with cols 1A, 1B, and 1C.  I need to ensure that the value entered into col 1B isn't greater than the value of a column in another table (let's say Table #2 col 2A).  Conceptually, the amount of money available is in Table #2, and the rows of my tabular form are an act of spending money (like orders or invoices), so I need to make sure we don't spend more than we have.  Does that make sense?
    Does anyone have any advice on the best way to do this?  I'm figuring the biggest issue here might be that we have to account for people entering multiple rows in the tabular form at one time, right?  So, if a person entered 3 orders/invoices, I need a total to make sure they didn't spend more than we have in Table #2.
    I greatly appreciate your help! 
    Best Regards,
    Laurie Baublitz

    Hi!
    You need one process of type AJAX Callback, something like:
    DECLARE
       l_limit number;
       l_number1 number := apex_application.g_x02;
       l_returnValue VARCHAR2(200);
    BEGIN
       select A2 into l_limit from table2;
       if l_number1 > l_limit then
          l_returnValue := 'LIMIT IS NOT SO BIG';
          if l_returnValue is not null then
             --this will write l_returnValue to the buffer, and the ajax callback will receive it
             htp.p(l_returnValue);
          end if;
       end if;
    END;
    Then you need some JavaScript on the page, something like:
    $('input[name="your column in tabular which is changed"]').live('change', function(){
       //if the value of the changed field differs from an empty string
       if($(this).val() != ''){
          //put the target element in a var to reference it in the ajax success callback
          var num = $('input[name="your column in tabular with value"]');
          $.post('wwv_flow.show',
                 {"p_request"      : "APPLICATION_PROCESS=your ajax callback function",
                  "p_flow_id"      : $v('pFlowId'),
                  "p_flow_step_id" : $v('pFlowStepId'),
                  "p_instance"     : $v('pInstance'),
                  "x01"            : $(this).val(),
                  "x02"            : $(num).val()},
                 function(data){
                    //whatever the callback wrote to the buffer comes back in data
                    if(data != ''){
                       alert(data);
                    }
                 });
       }
    });
    I cannot guarantee that the code is 100% working; if it isn't, you will need to make some changes, or build an example on apex.oracle.com and provide credentials here.
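    For the multi-row total the original question asks about, a page-level validation that sums the posted tabular column is another option. A rough sketch, assuming the amount column is mapped to the apex_application.g_f02 array and the limit sits in table2.a2 (both names are placeholders for your own):
    DECLARE
       l_total number := 0;
       l_limit number;
    BEGIN
       --sum every submitted row of the tabular form column
       for i in 1 .. apex_application.g_f02.count loop
          l_total := l_total + to_number(nvl(apex_application.g_f02(i), '0'));
       end loop;
       select a2 into l_limit from table2;
       --used as a page validation of type "PL/SQL Function Returning Boolean"
       return l_total <= l_limit;
    END;
    Created as a page-level validation, this runs once against all submitted rows before they are saved.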
    Regards,
    drama9346

  • Closing connection

    Hello, I work with JDeveloper. I had 10.1.2, which used Java 1.4.2, and I used a simple connection class to call stored procedures in an Oracle DB. I now have JDeveloper 10.1.3, which uses Java 1.5.0, and the same class fails when I try to use ResultSets. I used to close the connection and then return the ResultSet to whatever called it; now it tells me the connection is already closed when I try to access the ResultSet. I know this is why it's failing now, because it works if I don't close anything.
    Can someone tell me how/why this has changed, and what I can do about it?
    Thanks
    Class that used to work:
    import java.sql.CallableStatement;
    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import javax.naming.InitialContext;
    import javax.sql.DataSource;
    import oracle.jdbc.OracleTypes;

    public class dbConn {

        public dbConn() {
        }

        public static Connection getCon() {
            InitialContext ctx = null;
            DataSource ds = null;
            Connection conn = null;
            try {
                ctx = new InitialContext();
                ds = (DataSource) ctx.lookup("jdbc/app1");
                conn = ds.getConnection();
            } catch (Exception e) {
                System.out.println("Connection Error");
                e.printStackTrace();
            }
            return conn;
        }

        public static void closeCon(CallableStatement proc, Connection conn) {
            try {
                proc.close();
                conn.close();
            } catch (SQLException e) {
                System.out.println("Error closing connections: " + e);
            }
        }

        public static void main(String[] args) {
        }

        public static ResultSet GenResSet(String uname, String procName) {
            Connection conn = null;
            CallableStatement proc = null;
            ResultSet rs = null;
            String callString = "{ call " + procName + "(?,?) }";
            try {
                conn = dbConn.getCon();
                proc = conn.prepareCall(callString);
                proc.registerOutParameter(1, OracleTypes.CURSOR);
                proc.setString(2, uname);
                proc.execute();
                rs = (ResultSet) proc.getObject(1);
            } catch (SQLException e) {
                System.out.println("sql error");
                e.printStackTrace();
            } finally {
                // the connection is closed here, before rs is returned
                dbConn.closeCon(proc, conn);
            }
            return rs;
        }
    }

    Ok thanks. So I can learn: what exactly is my misconception about how it works? My beginner view is that you store the cursor returned by a database query in a ResultSet and then you can iterate through it. Why does it have to be stored in a collection first? I was closing the connection before I returned it, or so I thought. It went through the finally code to close every time, but I could still use the results. Maybe it wasn't closing them?
    That's exactly your misconception. After you close the connection, your ResultSet is no longer valid. In simple words, you need a live connection to fetch values from the ResultSet. In your method, you close the connection in the finally block and return the ResultSet. You will need to hold the results in something else, which would be some suitable collection.
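    A rough sketch of that suggestion, reusing the getCon/closeCon helpers above: walk the ResultSet while the connection is still open, copy the rows into a plain collection, then close everything and return the collection. The class name, the List-of-Maps shape and the error handling are illustrative, not from the original thread:
    import java.sql.CallableStatement;
    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.ResultSetMetaData;
    import java.sql.SQLException;
    import java.util.ArrayList;
    import java.util.LinkedHashMap;
    import java.util.List;
    import java.util.Map;
    import oracle.jdbc.OracleTypes;

    public class dbConnFetch {

        // Copies every row of the ref cursor into a list of column-name/value maps
        // while the connection is still open, then closes the statement and connection.
        public static List<Map<String, Object>> fetchRows(String uname, String procName) {
            List<Map<String, Object>> rows = new ArrayList<Map<String, Object>>();
            Connection conn = null;
            CallableStatement proc = null;
            try {
                conn = dbConn.getCon();
                proc = conn.prepareCall("{ call " + procName + "(?,?) }");
                proc.registerOutParameter(1, OracleTypes.CURSOR);
                proc.setString(2, uname);
                proc.execute();
                ResultSet rs = (ResultSet) proc.getObject(1);
                ResultSetMetaData meta = rs.getMetaData();
                while (rs.next()) {
                    Map<String, Object> row = new LinkedHashMap<String, Object>();
                    for (int i = 1; i <= meta.getColumnCount(); i++) {
                        row.put(meta.getColumnName(i), rs.getObject(i));
                    }
                    rows.add(row);
                }
            } catch (SQLException e) {
                e.printStackTrace();
            } finally {
                // safe to close now: the data already lives in the rows list
                dbConn.closeCon(proc, conn);
            }
            return rows;
        }
    }
    Callers then iterate the returned list instead of a ResultSet, so nothing depends on the connection staying open.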

  • Help! My system crashes very often and without any apparent reason.

    Hello,
    I think I have the same problem as Juju had in a previous thread.
    My system has the following:
      Pentium 4 2.4GHz HT (800 MHz FSB),
      MSI 865PE Neo2-LS MB,
      2 x 256MB DDR400 made by PQI,
      Nvidia GEForce 4 MX440 with 64MB RAM (Only AGP 4x),
      SoundBlaster Live! 4.1 (Value)
      Windows XP Pro OS
    First, I couldn't install Win XP with the memory in dual-channel
    configuration, only with a single memory module.
    Then, with one module, or with both modules in single (or dual) channel, I get a lot of Page Fault errors from DirectX applications (like Unreal 2 and NFS 6), and even from RadLight
    or MediaPlayer with some XviD-encoded movies. The system also reboots from time to time. The errors are not so frequent when using only one memory module, but they are
    still present.
    I must say that all these applications were running with no errors on my previous system (which had the same video and sound cards, but a PIII motherboard and SDRAM).
    I have tried different memory settings (even have set timings to the highest possible values - lowest performance) in BIOS but there is no change. I have also disabled HT, but still no result.
    The temperature of the CPU is 45-55 degrees Celsius and the system temperature is 35-40 C.
    Please, can somebody help me with this problem?
    Can it be the MB, the memory, or the video card? Or even Win XP?
    Best Regards,
    DaveJones

    Quote
    Originally posted by Neo 2
    RAM has to have a latency of 2 to run dual channel. So if it is 2.2 and up it will not work.
    Features of Dual DDR memory architecture include:
    Highest memory bandwidth: Dual DDR combines the power of DDR400 with two independent memory controllers, which yields a staggering 6.4GB per second of memory bandwidth, twice the memory bandwidth of other DDR400 chipsets. Increased memory bandwidth delivers better system and graphics performance, resulting in more overall productivity.
    Lowest latency: Both memory controllers operate concurrently with each other to hide latencies associated with typical chipsets. For example, controller "A" reads or writes to main memory while controller "B" prepares for the next access, and vice versa. As important is the second-generation DASP (dynamic adaptive speculative preprocessor), which has been re-architected for improved performance.
    Most stable and flexible memory system: End-users can now populate higher density DIMMs, up to 1GB each, to utilize the entire 3GB memory address map. This large memory map allows more applications, audio and video streams to coexist without conflict.
    512MB (256MB x 2 modules)
    TRUE PC3200 400MHz
    Pure Copper Heat Spreader ($16 value)
    6 Layer Ultra Low Noise Shielded PCB
    184-pin DIMM
    CAS Latency: 2-2.5
    Optimized for Dual Channel Operation
    Voltage: 2.5v-2.8V
    Sorry to disagree. CAS Latency of 2.5 or even 3 does not prevent you from running in dual channel mode. It may hinder MAT or PAT settings.
    DaveJones, if you scroll down to the bottom of this page, you will see a RAM compatibility table. Yours does not seem to be on the list, which might be the source of your problem:
    http://www.msi.com.tw/program/products/mainboard/mbd/pro_mbd_detail.php?UID=433&MODEL=MS-6728

  • How can I improve my FMS performance?

    Hi...
    I have two FMS 3.5 streaming servers... Both have Windows Server 2003 x64 with 16 GB of RAM...
    I'm streaming from a common storage and my Application.xml file is like this:
    <Application>
      <StreamManager>
        <VirtualDirectory>
          <!-- Specifies application specific virtual directory mapping for recorded streams.   -->
          <Streams>/;L:\media</Streams>
        </VirtualDirectory>
      </StreamManager>
      <DisallowedProtocols>rtmp,rtmps,rtmpt</DisallowedProtocols>
      <!-- Settings specific to runtime script engine memory -->
      <JSEngine>
        <!-- This specifies the max size (Kb.) the runtime can grow to before -->
        <!-- garbage collection is performed.                                 -->
        <RuntimeSize>20480</RuntimeSize>
      </JSEngine>
      <Client>
        <Bandwidth>
          <!-- Specified in bytes/sec -->
          <ServerToClient>2500000</ServerToClient>
          <!-- Specified in bytes/sec -->
          <ClientToServer>2500000</ClientToServer>
        </Bandwidth>
        <MsgQueue>
          <Live>
            <!-- Drop live audio if audio q exceeds time specified. time in milliseconds -->
            <MaxAudioLatency>2000</MaxAudioLatency>
            <!-- Default buffer length in millisecond for live audio and video queue. -->
            <MinBufferTime>2000</MinBufferTime>
          </Live>
          <Recorded>
            <!-- Default buffer length in millisecond for live audio and video, value cannot be set below this by Flash player. -->
            <MinBufferTime>2000</MinBufferTime>
          </Recorded>
          <Server>
            <!-- Ratio of the buffer length used by server side stream -->
            <!-- to live buffer.  The value is between 0 and 1.  To    -->
            <!-- avoid break up of audio, the ratio should not be more -->
            <!-- than 0.5 of the live buffer.                          -->
            <BufferRatio>0.5</BufferRatio>
          </Server>
        </MsgQueue>
         <!--OVERRIDE APPLICATION LEVEL-->
         <!-- Specifies the RTMP chunk size to use in all streams for this     -->
         <!-- application.  Stream content breaks into chunks of this size     -->
         <!-- in bytes.  Larger values reduce CPU usage, but also commit to     -->
         <!-- larger writes that can delay other content on lower bandwidth     -->
         <!-- connections.  This can have a minimum value of 128 (bytes) and     -->
         <!-- a maximum value of 65536 (bytes) with a default of 4096 bytes     -->
         <!-- Note that older clients may not support chunk sizes larger than     -->
         <!-- 1024 bytes. If the chunk setting is larger than these clients can     -->
         <!-- support, the chunk setting will be capped at 1024 bytes.          -->
         <OutChunkSize>3072</OutChunkSize>
         <!--OVERRIDE APPLICATION LEVEL-->
         <!-- An application can be configured to deliver aggregate messages to       -->
         <!-- clients that support them by setting the "enabled" attribute to "true". -->
         <!-- The server will attempt to send aggregate messages to these supported   -->
         <!-- clients based whenever possible.                                        -->
         <!-- When this setting is disabled, aggregate messages will always be broken -->
         <!-- up into individual messages before being delivered to clients.          -->
         <!--  The default is "true".                                         -->
         <AggregateMessages enabled="false"></AggregateMessages>
         <!--OVERRIDE VHOST LEVEL-->
         <AutoCloseIdleClients enable="true">     
              <CheckInterval>60</CheckInterval>
              <MaxIdleTime>1200</MaxIdleTime>
         </AutoCloseIdleClients>
      </Client>
    </Application>
    The Server.xml and fms.ini have default settings.
    My RAM usage is always low... 1.1 GB... and my CPU usage is also low. The FMSCore.exe process reaches 630 MB of usage and then stays at that level.
    How can I make use of my server's RAM, or do you have any tips or suggestions to improve FMS performance with some special settings?
    I tried to change this in fms.ini:
    SERVER.FLVCACHE_MAXSIZE=500
    to 1000, but the application crashes after it reaches 2 GB of RAM.
    Thanks in advance
    best,
    Marco

    Hi,
    Thanks for trying the different settings...
    When we talk about tuning for the best performance, different settings are required for different scenarios. For example, a live broadcast would require aggregation of messages (if the latency is acceptable), but a live conferencing solution might need to disable it. So unless the use case is described in a little more detail, it is tough to settle on any generic settings to improve performance.
    I would also like to comment on the FLV cache size. I recommend setting it to 1/4 the size of your RAM. You can safely tweak only the SERVER.FLVCACHE_MAXSIZE variable without worrying about the other variable (which is a percentage). This is supposed to take priority over all other settings.
    If your use case is VOD, use 1/4 of RAM as the cache size, and also tweak the video buffer settings for MP4 (if you have MP4 content as well). OutChunkSize is another variable that you may want to tweak (at the cost of CPU and latency). You can aggregate the messages.
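    To make the 1/4-of-RAM suggestion concrete for the 16 GB machines described above, the fms.ini line quoted in the original post would become something like the following (a sketch; the value is in megabytes, as the default of 500 suggests):
    # rootinstall/conf/fms.ini -- FLV cache sized to roughly 1/4 of 16 GB RAM
    SERVER.FLVCACHE_MAXSIZE=4096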
    Also, you need to make sure the client-side buffer settings match your FMS settings as well as your use case requirements.
    Hope it all helps.
    Thank you !

  • Tolerance for Invoice at header level

    Hi,
    In our business process we use unplanned delivery costs for freight. They are allowed up to 5% of the purchase order value. If it is more than that, purchasing needs to review it with the vendor.
    In this scenario I want to set a tolerance limit in LIV: whenever the invoice value exceeds the PO value by more than 5%, the system has to block the invoice.
    I used tolerance key PP, but it will not work at header level. It is specific to each PO line item.
    Please advise if there is any way we can block invoices when the tolerance limit exceeds the PO value for the total goods received.
    Best Regards

    Jaya,
    Unfortunately there are no tolerance keys designed to work in the manner that you desire. All the standard tolerance keys work at the item level.
    The only solution I can think of would be do some sort of validation on the unplanned delivery cost field via a user exit/BADI in MIRO.
    Hope this helps {or probably not:)!}
    H Narayan

  • Hangs a while on NVRAM check, then reboots.

    Hi,
    As the topic says, my computer boots and starts checking the NVRAM as usual. However, since I updated to BIOS 2.20 it hangs for a while on the NVRAM check. After ~10 seconds the computer reboots and continues as normal (and this time the NVRAM check completes instantly).
    Furthermore, the computer crashes every now and then (BSOD/reboot, depending on how I set 'what to do on errors' in Windows). I've seen this problem discussed several times and tried the advice given in those topics, but with no result. It's hard to say whether something worked or not, because it happens irregularly (sometimes it runs for a week without crashes, sometimes it crashes multiple times a day).
    Things I did so far:
    Set DDR Voltage to 2.7 and 2.8 (Currently: 2.7)
    Memtest ran fine without errors for about 8 hours.
    Disabled HT (Currently: Enabled)
    Set performance to slow (Currently: Fast)
    Cleared NVRAM (through BIOS setting).
    System specs:
    MSI 865PE Neo2-LS (BIOS 2.20)
    Intel P4 2.8GHz, 800FSB (HT enabled, no OC)
    Standard PSU 300W, 20A@+3.3v 30A@+5v 13A@+12v
    2x512MB TwinMos DDR400
    MadView Radeon 9600Pro, 256MB
    Creative Live! 5.1 Value
    Samsung 120GB SATA, 8MB Cache
    NEC ND-2500A DVD ReWriter
    AOpen 12xDVD Drive
    Windows XP Pro, SP1
    Misc info:
    CPU temperature: ~60C
    System temperature: ~40C
    Using the built-in ethernet adapter

    The BSODs just give some general information (no driver or cause whatsoever) and a "STOP 0x000000024 ..." message.
    edit
    Just had another BSOD. This time with a message (seen it before, before the new powersupply):
    Quote
    IRQL_NOT_LESS_OR_EQUAL
    STOP 0x0000000A, 0x00000000, 0x00000002, 0x00000001, 0x80525DC9

  • Row locking issue with version enabled tables

    I've been testing the effect of locking in version-enabled tables in order to assess Workspace Manager restrictions when updating records in different workspaces, and I have encountered a locking problem: I can't seem to update different records of the same table in different sessions if those same records have previously been updated and committed in another workspace.
    I'm running the tests on 11.2.0.3.  I have ROW_LEVEL_LOCKING set to ON.
    Here's a simple test case (I have many other test cases which fail as well but understanding why this one causes a locking problem will help me understand the results from my other test cases):
    --Change tablespace names as required
    create table t1 (id varchar2(36) not null, name varchar2(50) not null) tablespace XXX;
    alter table t1 add constraint t1_pk primary key (id) using index tablespace XXX;
    exec dbms_wm.gotoworkspace('LIVE');
    insert into t1 values ('1', 'name1');
    insert into t1 values ('2', 'name2');
    insert into t1 values ('3', 'name3');
    commit;
    exec dbms_wm.enableversioning('t1');
    exec dbms_wm.gotoworkspace('LIVE');
    exec dbms_wm.createworkspace('TESTWSM1');
    exec dbms_wm.gotoworkspace('TESTWSM1');
    --update 2 records in a non-LIVE workspace in preparation for updating in different workspaces later
    update t1 set name = name||'changed' where id in ('1', '2');
    commit;
    quit;
    --Now in a separate session (called session 1 for this example) run the following without committing the changes:
    exec dbms_wm.gotoworkspace('LIVE');
    update t1 set name = 'changed' where id = '1';
    --Now in another session (session 2) update a different record from the same table.  The below update will hang waiting on the transaction in session 1 to complete (via commit/rollback):
    exec dbms_wm.gotoworkspace('LIVE');
    update t1 set name = 'changed' where id = '2';
    I'm surprised that records with different ids can't be updated in different sessions, i.e. why does session 1 lock the update of record 2, which is not being updated anywhere else?  I've tried this using different non-LIVE workspaces with similar results.  I've tried changing table properties, e.g. initrans, and still get a lock.  The changes to table properties are successfully propagated to the _LT tables but not to all the related workspace manager tables created for table T1 above.  I'm not sure if this is the issue.
    Note an example of the background workspace manager query that may create the lock is something like:
    UPDATE TESTWSM.T1_LT SET LTLOCK = WMSYS.LT_CTX_PKG.CHECKNGETLOCK(:B6 , LTLOCK, NEXTVER, :B3 , 0,'UPDATE', VERSION, DELSTATUS, :B5 ), NEXTVER = WMSYS.LT_CTX_PKG.GETNEXTVER(NEXTVER,:B4 ,VERSION,:B3 ,:B2 ,683) WHERE ROWID = :B1
    Any help with this will be appreciated.  Thanks in advance.

    Hi Ben,
    Thanks for your quick response.
    I've tested your suggestion and it does work with 2 workspaces, but the same problem is encountered when additional workspaces are created.
    It seems that if multiple workspaces are used in a multi-user environment, locks will be inevitable, which will degrade performance, especially if long transactions are used.
    Deadlocks can also be encountered, where eventually one of the sessions is rolled back by the database.
    Is there a way of avoiding this, e.g. by controlling the creation of workspaces and table updates?
    I've updated my test case below to demonstrate the extra workspace locking issue.
    --change tablespace name as required
    create table t1 (id varchar2(36) not null, name varchar2(50) not null) tablespace XXX;
    alter table t1 add constraint t1_pk primary key (id) using index tablespace XXX;
    exec dbms_wm.gotoworkspace('LIVE');
    insert into t1 values ('1', 'name1');
    insert into t1 values ('2', 'name2');
    insert into t1 values ('3', 'name3');
    commit;
    exec dbms_wm.enableversioning('t1');
    exec dbms_wm.gotoworkspace('LIVE');
    exec dbms_wm.createworkspace('TESTWSM1');
    exec dbms_wm.gotoworkspace('TESTWSM1');
    update t1 set name = name||'changed' where id in ('1', '2');
    commit;
    Session 1:
    exec dbms_wm.gotoworkspace('LIVE');
    update t1 set name = 'changed' where id = '1';
    session 2:
    exec dbms_wm.gotoworkspace('LIVE');
    update t1 set name = 'changed' where id = '2';
    --end of original test case, start of additional workspace locking issue:
    Session 1:
    rollback;
    Session 2:
    rollback;
    --update record in both workspaces
    exec dbms_wm.gotoworkspace('LIVE');
    update t1 set name = 'changed' where id = '3';
    commit;
    exec dbms_wm.gotoworkspace('TESTWSM1');
    update t1 set name = 'changed' where id = '3';
    commit;
    Session 1:
    exec dbms_wm.gotoworkspace('LIVE');
    update t1 set name = 'changed' where id = '1';
    session 2:
    exec dbms_wm.gotoworkspace('LIVE');
    update t1 set name = 'changed' where id = '2';
    Session 1:
    rollback;
    Session 2:
    rollback;
    exec dbms_wm.gotoworkspace('LIVE');
    exec dbms_wm.createworkspace('TESTWSM2');
    exec dbms_wm.gotoworkspace('TESTWSM2');
    update t1 set name = name||'changed2' where id in ('1', '2');
    commit;
    Session 1:
    exec dbms_wm.gotoworkspace('LIVE');
    update t1 set name = 'changed' where id = '1';
    --this now gets locked out by session 1
    session 2:
    exec dbms_wm.gotoworkspace('LIVE');
    update t1 set name = 'changed' where id = '2';
    Session 1:
    rollback;
    Session 2:
    rollback;
    --update record 3 in TESTWSM2
    exec dbms_wm.gotoworkspace('TESTWSM2');
    update t1 set name = 'changed' where id = '3';
    commit;
    Session 1:
    exec dbms_wm.gotoworkspace('LIVE');
    update t1 set name = 'changed' where id = '1';
    --this is still locked out by session 1
    session 2:
    exec dbms_wm.gotoworkspace('LIVE');
    update t1 set name = 'changed' where id = '2';
    Session 1:
    rollback;
    Session 2:
    rollback;
    --try updating LIVE
    exec dbms_wm.gotoworkspace('LIVE');
    update t1 set name = 'changed' where id = '3';
    commit;
    Session 1:
    exec dbms_wm.gotoworkspace('LIVE');
    update t1 set name = 'changed' where id = '1';
    --this is still locked out by session 1
    session 2:
    exec dbms_wm.gotoworkspace('LIVE');
    update t1 set name = 'changed' where id = '2';
    Session 1:
    rollback;
    Session 2:
    rollback;
    --try updating TESTWSM1 workspace too - so all have been updated since TESTWSM2 was created
    exec dbms_wm.gotoworkspace('TESTWSM1');
    update t1 set name = 'changed' where id = '3';
    commit;
    Session 1:
    exec dbms_wm.gotoworkspace('LIVE');
    update t1 set name = 'changed' where id = '1';
    --this is still locked out by session 1
    session 2:
    exec dbms_wm.gotoworkspace('LIVE');
    update t1 set name = 'changed' where id = '2';
    Session 1:
    rollback;
    Session 2:
    rollback;
    --try updating every workspace afresh
    exec dbms_wm.gotoworkspace('LIVE');
    update t1 set name = 'changedA' where id = '3';
    commit;
    exec dbms_wm.gotoworkspace('TESTWSM1');
    update t1 set name = 'changedB' where id = '3';
    commit;
    exec dbms_wm.gotoworkspace('TESTWSM2');
    update t1 set name = 'changedC' where id = '3';
    commit;
    Session 1:
    exec dbms_wm.gotoworkspace('LIVE');
    update t1 set name = 'changed' where id = '1';
    --this is still locked out by session 1
    session 2:
    exec dbms_wm.gotoworkspace('LIVE');
    update t1 set name = 'changed' where id = '2';
    Session 1:
    rollback;
    Session 2:
    rollback;

  • How do I go back to my previous system folder after an Archive and Install?

    Did an archive and install (retaining my existing user account), only to discover it was unnecessary. (Turns out the problem was hardware-related.)
    How do I go back to using my previous system folder?

    FloydianSlip wrote:
    Huh. Hardly seems to be worth "archiving" if you can't go back to the archive.
    The purpose of an *Archive & Install* is to install a fresh, known-good copy of the OS, while preserving in the archive all the files from the previous copy of the installed OS that you might potentially need for some reason. (This is why it is called an Archive & Install.)
    However, since any of these archived files might have been damaged in some way since they were installed or created, or in some way conflict with a freshly installed OS, they cannot be considered "known-good" without further testing & should not be reintroduced haphazardly into the "live" system.
    The value of this install method should be obvious if you consider that if the OS is sufficiently damaged it will not run the computer, either at all or well enough to recover from whatever damage is done. Without this option, you would have to erase the existing startup disk completely (with the *Erase & Install* method), losing everything created or installed besides what is contained in the OS installer.
    The closest equivalents to the kind of archive that you can "go back to" are cloning the entire drive or using the 'restore from Time Machine' option from the installer DVD's Utilities menu, assuming you use Time Machine.

  • Finished Goods at  Moving average Price.

    Hi Experts,
    In my company FG is maintained at moving average price; during go-live the value was entered manually in the material master of the FG.
    Now, at the time of production confirmation, the system considers the BOM for consumption, but for the cost of production the system just takes the value entered in the FG material master.
    Please explain what the impact is if I maintain my FG at moving average price, and whether we can change that to standard price.
    Please explain briefly.
    Regards
    Sreenivas.P

    Hi,
    Ideally, raw material and packing material should be maintained at moving average price for CO-PC (Product Costing). Finished goods / semi-finished goods should be maintained at standard price only.
    Once you maintain them at standard price, then after your costing run the system will pick the standard price of that FG / SFG based on the following:
    1. Prices of RM / PM from the material components defined in the BOM
    2. Activity prices defined in KP26
    3. Based on the routing / master recipe, the system will calculate the activities defined for your resource / work center
    4. Overheads from the costing sheet
    So, in the standard price of your FG / SFG, the system will include all these details. When you create your process order / production order, the system will book the actual cost of the FG / SFG on the production order. So, at month end, when you run the variance cycle, the system will also give you the variance between plan and actual.
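    As a purely made-up illustration of how those pieces roll up into the standard price and the month-end variance (all numbers are invented):
    RM / PM from the BOM:                        60
    Activity prices from KP26:                   25
    Overheads from the costing sheet:            15
    Standard price of the FG:                   100
    Actual cost booked on the production order: 110
    Variance reported by the variance cycle:     10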
    Hope it will help a lot.
    Thanks & Regards,
    Taral Patel

  • GOOP queues and other stored references

    I need to ask this because I'm not at the target machine to try it myself until Monday.
    Re: Endevo by-ref GOOP Wizard 3.0.5
    I am using queues in process objects.
    When I create the object, I create the queue and store the queue reference in the attributes of the object.
    Then I use the queue for controlling public status and shutdown functions, etc.
    My question is the following:
    I ran into a glitch today whereby the queue refs and some other refs go invalid while the GOOP repository is showing "live" objects.
    The values of the refs are intact, but they are invalid.
    Does anyone have any idea what I am describing?
    Thank You
    PS: I hope this isn't simply LabVIEW not running. How can that be when I am using the object inspector? I don't get it yet.

    No, I do not think it is an error.
    It turns out the queue and DAQ task references go invalid because of the relationship of the Endevo GOOP object to LabVIEW (or so it seems). I think it is because the VI that created the object goes idle, and so LabVIEW assumes the ref is no longer needed (despite any option settings).
    I am seeing the objects in the GOOP repository, but references to queues and data acquisition tasks are invalidated as
    soon as the VI that uses them goes idle.
    It does not seem to affect FieldPoint references or VISA COM ports, and all other data in the GOOP object seems fine. I guess that's because FP and VISA are only strings.
    It's a real snag to me because I did not see it coming. Now I have to create the queue and re-establish the DAQ tasks to ensure they are valid
    after the create method, even if the object exists.
    I prefer to use the create-or-look-up-existing feature of the method because it gives a lot of flexibility.
    Thank You

  • Front page and Hover button

    Hello,
    Quick question. Trying to help a buddy of mine here. He created a web page in FrontPage and added a Java hover button, which creates a class called fphover.class. When he published the page, the buttons don't show. When I load the page on my PC, which has the SDK and all the tools I need to troubleshoot, I get the following exception errors.
    java.lang.NoClassDefFoundError: images/fphover (wrong name: fphover)
         at java.lang.ClassLoader.defineClass0(Native Method)
         at java.lang.ClassLoader.defineClass(Unknown Source)
         at java.security.SecureClassLoader.defineClass(Unknown Source)
         at sun.applet.AppletClassLoader.findClass(Unknown Source)
         at java.lang.ClassLoader.loadClass(Unknown Source)
         at sun.applet.AppletClassLoader.loadClass(Unknown Source)
         at java.lang.ClassLoader.loadClass(Unknown Source)
         at sun.applet.AppletClassLoader.loadCode(Unknown Source)
         at sun.applet.AppletPanel.createApplet(Unknown Source)
         at sun.plugin.AppletViewer.createApplet(Unknown Source)
         at sun.applet.AppletPanel.runLoader(Unknown Source)
         at sun.applet.AppletPanel.run(Unknown Source)
         at java.lang.Thread.run(Unknown Source)
    I have verified the location of the class on the FTP server: \images\fphover.class. Is there something stupid that we are missing? I haven't been using Java much lately, so I'm getting very rusty. Any help will be appreciated. Want to look good in front of my friend :-D

    This is just a FrontPage hover button; FrontPage creates the class. My question basically is: when the page is published, is it enough to just have that class file on the server and have the HTML code point to it?
    and here are the tags btw..
    </applet>
    <applet code="images/fphover.class" codebase="./" width="120" height="24">
    <param name="hovercolor" value="#0000FF">
    <param name="textcolor" value="#FFFFFF">
    <param name="text" value="Sea Otter Live">
    <param name="color" value="#000000">
    <param name="effect" value="glow">
    <param name="url" valuetype="ref" value="http://seaottermusic.com/sea_otter_live.htm">
    </applet>
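    The "(wrong name: fphover)" part of the exception usually means the class file defines a top-level class named fphover with no package, but it is being loaded as images/fphover. One common way to express that in the tag, sketched here and not verified against this particular site, is to keep the class name bare and put the directory in codebase instead:
    <applet code="fphover.class" codebase="images/" width="120" height="24">
    <param name="hovercolor" value="#0000FF">
    <param name="textcolor" value="#FFFFFF">
    <param name="text" value="Sea Otter Live">
    <param name="color" value="#000000">
    <param name="effect" value="glow">
    <param name="url" valuetype="ref" value="http://seaottermusic.com/sea_otter_live.htm">
    </applet>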

  • X2008 Bugs and formatting wish list

    Post Author: solver
    CA Forum: Xcelsius and Live Office
    I have been testing out the X2008 trial version for a few days now. I found it more efficient to use than 4.5, but I've encountered two bugs (or maybe these are simply default settings I need to change somewhere, either in Excel or in Xcelsius):
    - Sorting numbers in the columns of a List View component is done based on string order, not on the numerical value (e.g. 9 is considered higher on the list than 19).
    - Cell values that are set to BLANK with a formula in Excel (e.g. if(A1=3,2,"") ) are interpreted correctly as missing values in the line plot on the canvas, but show up as 0 values in the .swf file. (I've tried using NA() instead of "", but that's not recognized by X2008.)
    Both of these bugs are substantial, because they interfere with the fundamental message the dashboard is supposed to convey. Please let me know if there is any way of working around these for now, so that I can show some prototype dashboards to colleagues.
    I've also run into two formatting limitations that can generally be handled by working around them, but expanded formatting capabilities would be nice to have:
    - the ability to format individual columns of a List View component
    - more shapes and drawing options so I can set up context diagrams for the elements of the dashboard
    Cheers,
    solver

    Post Author: solver
    CA Forum: Xcelsius and Live Office
    solver:
    - Cell values that are set to BLANK with a formula in Excel (e.g. if(A1=3,2,"") ) are interpreted correctly as missing values in the line plot on the canvas, but show up as 0 values in the .swf file. (I've tried using NA() instead of "", but that's not recognized by X2008.)
    Note: the full lookup formulas I use are =IF(INDEX(I12:I57,'Display Parameters'!$F$37)<>"",INDEX(I12:I57,'Display Parameters'!$F$37)*$D$4,""). These worked fine in 4.5 and look correct on the canvas in X2008. However, at runtime all of the "" cells are replaced by 0.
    solver

Maybe you are looking for

  • Work area field value not getting populated in table control grid.

    Hi all, I am currently facing an issue where I have declared a variable and have fetched the workarea field name in it. To be exact, the variable contains the workarea name whose value I am finally populating to the table control. Now although the wo

  • How to Delete unwanted Apps for ever?

    Sometimes we download apps which may turn out to be just duds or NOT as good and functional as we expect them to be. Then we decide to delete them, and though they are deleted from the iPad (4th Gen) or iPhone, they remain in iTunes and show up at the ti

  • PHP, MSSQL 2000, SERVER 2003 and FLEX 2

    Hi everyone, I tried to connect to MSSQL using PHP. I did not get any error message but the only problem is when I try to test the problem, I get this error message. Everything works fine on my local computer "developer", but when I try to program an

  • Python numeric goes numpy

    Hello everyone, I am using a lot of science packages for Python. The current 'standard' package for numerical computation in Python is 'python-numeric'. Tomorrow (October 25th) is the official release day for the replacement of Numeric, called numpy.

  • Is there a way to unlock the screen of an iPhone that I found, so that I can find out who the phone belongs to

    I found an iPhone and I would like to find out exactly who the phone belongs to, except the owner of the phone has a 4-digit lock code on the screen. So my question is: is there any way to unlock this iPhone so that I can figure out exactly who it be