Context switching / Threads

Hello!
The following program runs three threads (the main thread plus two clickers) which context-switch between each other.
We often get zero '0' for the low priority thread when we run this program.
MY QUESTION IS WHY DO WE GET ZERO ?
As far as my understanding goes, even with preemptive multitasking between the threads,
the low priority thread should have run through a few iterations and thus produced some value other
than zero '0'.
Secondly, most of the time we get negative values for the high priority thread. Why is that?
Is it because volatile sets the variable 'running' to some different value?
The speed of my processor is 1.5GHz.
GOD BLESS YOU.
NADEEM.
// Demonstrates thread priorities.
class Clicker implements Runnable {
  int click = 0;
  String name;
  Thread t;
  // Note: 'running' is NOT volatile, so the write in stop() is not
  // guaranteed to become visible to the loop in run().
  private boolean running = true;

  public Clicker(String tname, int p) {
    name = tname;
    t = new Thread(this, name);
    t.setPriority(p);
    System.out.println("Current Thread is " + t + " " + t.getPriority());
  }

  public void start() {
    t.start();
  }

  public void run() {
    while (running) {
      click++;
    }
  }

  public void stop() {
    running = false;
  }
}

class HLPriority {
  public static void main(String args[]) {
    System.out.println("Active Count : " + Thread.activeCount());
    Thread.currentThread().setPriority(Thread.MAX_PRIORITY);
    Clicker hi = new Clicker("Hi", Thread.NORM_PRIORITY + 2);
    Clicker lo = new Clicker("Lo", Thread.NORM_PRIORITY - 2);
    System.out.println("Active Count : " + Thread.activeCount());
    lo.start();
    hi.start();
    try {
      System.out.println("Sleeping Thread : " + Thread.currentThread());
      Thread.sleep(10000);
    } catch (InterruptedException e) {
      System.out.println("Main Thread interrupted : " + e);
    }
    hi.stop();
    lo.stop();
    try {
      hi.t.join();
      lo.t.join();
    } catch (InterruptedException e) {
      System.out.println("Interrupted Exception caught : " + e);
    }
    System.out.println("Low priority thread  : " + lo.click);
    System.out.println("Hi  priority thread  : " + hi.click);
  }
}

> Hello !
> The following program is for 3 Threads which do context switching.
> Often we get '0' zero for the low priority thread when we run this program.
> MY QUESTION IS WHY DO WE GET ZERO ?
Presumably because the low priority thread gets no CPU time.
> As far as my understanding is concerned; even if preemptive multitasking is done by the threads,
> the low priority thread should have run through a few iterations and thus given some value other
> than zero '0'.
You can't make ANY assumptions about when or how much CPU time a given thread will get. Why don't you let your main thread sleep longer (a minute, or five, or ten) and see if Lo gets some cycles then.
> Secondly most of the time we get negative values for the high priority thread. Why is that ?
count = Integer.MAX_VALUE;
count++; // --> Integer.MIN_VALUE (-2^31)
I guess 10 seconds is enough time for a thread in a tight loop to count to 2 billion.
> Is it because of the fact that volatile sets the variable 'running' to some different value ?
Volatile does nothing of the sort, and, in any case, isn't even in your code.
> GOD BLESS YOU.
I didn't sneeze.
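The wrap-around mentioned above is easy to see in isolation. A minimal sketch (the class and variable names are mine, not from the original program):

```java
// Demonstrates silent int overflow: incrementing past Integer.MAX_VALUE
// wraps around to Integer.MIN_VALUE (two's-complement arithmetic).
// No exception is thrown; the count simply goes negative.
public class OverflowDemo {
    public static void main(String[] args) {
        int count = Integer.MAX_VALUE;   // 2147483647
        count++;                         // wraps silently
        System.out.println(count);       // prints -2147483648
    }
}
```

This is exactly what happens to `hi.click` after roughly 2 billion iterations of the tight loop.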

Similar Messages

  • Forcing context switching between Labview threads

    Hello,
 I have 2 threads interfacing with 2 serial ports in LabVIEW (let's say Thread1 is responsible for polling COM1 and Thread2 for polling COM2).
    By using occurrences, Thread1 executes before Thread2 and context switching is done according to Labview between the threads. Now, I'm looking to do the following :
    As soon as a specific block of code is executed in Thread1, I need Labview to force context switching to execute Thread2. I want to do this in order to synchronize the data received from the 2 threads.
    1- So, can this be done, or are there other ways for synchronization ?
    2- What if I want Labview to run as a real-time process or a higher priority thread on Windows XP in order to emulate the real-time effect on a Windows XP and not get any delays ? ( I already changed the process priority from the Process explorer, but there seems no effect. I also changed the Labview threads priority to real-time)
    3- Are there better approaches for (1) & (2)?  
    Thank you,
      Walid F. Abdelfatah 

    wfarid wrote:
    I already used occurrences for Thread1 to execute before Thread2. Do occurrences guarantee that LabVIEW will switch the context to Thread2 as soon as the occurrence is fired?
    -- Walid 
    NO!
    LV depends on the OS to schedule threads. If you are sticking with non-RT then you would be better off getting thread one to do the work while it has the CPU.
    Ben
    Ben Rayner

  • High thread context switching for java web application

    We have been load testing our Java web application and observe high CPU usage with 50 users (which doesn't seem practical). The CPU shoots up above 80%. While profiling it with Java Flight Recorder (JFR) we see that the context switch rate is 8400 per second (as seen in the Hot Threads tab in Java Mission Control). Analyzing the hot threads in the JFR recording, the CPU usage appears distributed across the application threads, with each thread using less than 3% CPU.
    Increasing the user load to 100, 150 or 200 users, we see the CPU shooting up above 90% and the throughput (transactions per second) remaining constant (as seen for the 50-user load), while the response time crosses the acceptable threshold (3 sec). Decreasing the user load to 20 users shows the CPU usage averaging above 55%. It certainly isn't true that the application threads are using up the CPU, since our application is not CPU bound. The Hot Packages tab under the Code tab group confirms this by showing that most of the application's time is spent executing database queries.
    We use GlassFish 3.1.2.2 as our application server, where the max thread pool size is configured as 100. Oracle Linux Server release 6.4 is our operating system, with Linux kernel version 2.6.39-400.214.4.el6uek.x86_64. I tried executing the Linux commands "watch -n0.5 pidstat -w -I -p " and "watch -n.5 grep ctxt /proc//status" to see the voluntary and involuntary thread context switching at the OS level, but they don't give any results.
    Suspecting that high context switching could be causing the CPU to shoot up: do you have guidelines on how to confirm that thread context switching is the cause of the high CPU, and what ways are there to tune the JVM or the application if that's the cause?
    Thanks!

    Kelum -
    We just saw this issue today for the first time. Have you been able to find a cause?
    We upgraded our 32bit Windows operating systems this weekend to use the /3GB flag. Since then, we have seen that our servers have ample heap space, but are dangerously low in PTE memory.
    But when we've been diagnosing the state of the server that produced this error (we run 2 nodes on 3 different computers; only 1 produced this error; the other 5 are working normally), everything looked fine. The server was reporting sufficient PTE availability, plenty of heap space, and around 172 threads (we expect to be able to run many more than that).
    When we restarted the node, it came up fine and everything appeared to be working normally.
    So I'm looking for any clue as to the root cause, and what kind of resolution to explore. Any clues or pointers would be greatly appreciated.
    Paul Christmann

  • Thread context switching.

    Since I'm working with some real-time applications I'm interested in the thread switching mechanisms of Java ME.
    So my question is when does the context switching take place?
    If a thread A changes the priority of thread B to higher than its own priority will thread B preempt A instantly or is it neccessary to cause the current thread to interrupt?
    How do same priority threads preempt each other?
    What priority is given to the garbage collector?
    Mikael

    Hi Mikael,
    I'm not a ME VM expert, so do not have the deep knowledge. But from what I know:
    - thread switching can take place at almost any point during the execution of Java code. That's in interpreted mode. In compiled mode there are some limitations on when it can happen, but it can still be described as "at almost any point"
    - the scheduler is said to have a "fair" policy. That is, all threads get their share of CPU time; the priority defines how big this share is
    - the current thread gets preempted when it has exhausted its currently allocated share of CPU time. The next thread is scheduled at that point
    - I don't know whether rescheduling happens when a thread's priority is modified
    - GC is not a thread and it does not have any priority. It gets invoked according to internal logic which does not depend on the logic of the scheduler but rather on heap parameters (configuration and usage)
    Regards,
    Andrey
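A quick way to see the "share of CPU time" point on a desktop JVM is to run two counting threads at different priorities and compare their totals. The class and method names below are mine; the ratio you get varies by OS and VM, and nothing guarantees the low-priority thread is starved to exactly zero:

```java
// Two counter threads at different priorities. Their final counts show each
// thread's relative share of CPU time; results vary by OS scheduler and JVM.
public class PriorityShare {
    private static volatile boolean running;

    static long[] race(int millis) throws InterruptedException {
        running = true;
        long[] counts = new long[2];
        Thread lo = new Thread(() -> { while (running) counts[0]++; });
        Thread hi = new Thread(() -> { while (running) counts[1]++; });
        lo.setPriority(Thread.MIN_PRIORITY);
        hi.setPriority(Thread.MAX_PRIORITY);
        lo.start();
        hi.start();
        Thread.sleep(millis);   // let them compete for CPU time
        running = false;        // volatile, so both loops see the update
        lo.join();
        hi.join();
        return counts;
    }

    public static void main(String[] args) throws InterruptedException {
        long[] c = race(500);
        System.out.println("lo = " + c[0] + ", hi = " + c[1]);
    }
}
```

Note the stop flag here is declared `volatile`, unlike in the original program, so the loops are guaranteed to terminate.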

  • Context switching.

    Theory: Firewalls essentially partition the Java Card platform’s object system into separate
    protected object spaces called contexts. The
    firewall is the boundary between one context and another. The Java Card RE shall
    allocate and manage a context for each Java API package containing applets. All
    applet instances within a single Java API package share the same context. There is
    no firewall between individual applet instances within the same package. That is, an
    applet instance can freely access objects belonging to another applet instance that
    resides in the same package.
    That is the theory. What happens in my case: my Java Card project contains three packages, and one of them contains one Java Card applet. Splitting into three packages was necessary because the application is large. What about the object instances from the other packages? Are they assigned to another context, and what happens when the Java Card applet instance accesses these objects? Is context switching happening?

    Patrick,
    Don't worry about context switching. Build a good load test. Run it against
    a "best guess" number of exec threads. Increase the number of threads and
    run again. If overall throughput drops, then decrease the number of threads
    and run again. Start with coarse increments (5 threads?) and work from there
    until you get the best setting.
    Peace,
    Cameron Purdy
    Tangosol, Inc.
    http://www.tangosol.com/coherence.jsp
    Tangosol Coherence: Clustered Replicated Cache for Weblogic
    "Patrick Acheson" <[email protected]> wrote in message
    news:3d5aae20$[email protected]..
    >
    In setting the executeThreadCount variable for Weblogic 5.10, if the variable is
    too high there will be a lot of context switching going on. What would constitute
    a lot of context switching as opposed to what would be a normal or expected amount?
    Our executeThreadCount is set at 100 and we have 4 CPUs. In Perfmon, about 10%
    of the threads show 1 to 2 context switches per second.
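Cameron's tune-by-measurement loop above (increase the thread count while throughput improves, back off once it drops) can be sketched in code. Everything here is illustrative: `ThreadCountTuner` and the `measure` callback are hypothetical stand-ins for running your real load test at a given thread count:

```java
import java.util.function.IntToDoubleFunction;

// Hill-climbing search for the best thread count: step up while throughput
// improves, step down if that helps instead, stop when neither direction does.
public class ThreadCountTuner {
    // measure: runs the load test at a given thread count and returns the
    // observed throughput (hypothetical; plug in a real harness here).
    static int tune(IntToDoubleFunction measure, int start, int step) {
        int best = start;
        double bestTps = measure.applyAsDouble(best);
        while (true) {
            double up = measure.applyAsDouble(best + step);
            if (up > bestTps) { best += step; bestTps = up; continue; }
            double down = measure.applyAsDouble(best - step);
            if (down > bestTps) { best -= step; bestTps = down; continue; }
            return best; // neither direction improves: done
        }
    }

    public static void main(String[] args) {
        // Toy throughput curve peaking at 40 threads, for illustration only.
        IntToDoubleFunction toy = n -> -(n - 40) * (n - 40);
        System.out.println(tune(toy, 25, 5)); // climbs 25 -> 40
    }
}
```

In practice each "measurement" is a full load-test run, so coarse steps first (as Cameron suggests) keep the number of runs manageable.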

  • Dynamically-Launched VIs and Context Switching

    I'm working on a project that dynamically calls instances of a reentrant VI, which I assume is a pretty common practice. Everything works pretty well, until the number of calls to this dynamic VI gets pretty large--on the order of 1000 or more--at which point, we begin to see performance degradation. My guess is that we are taking hits due to context switching, since the number of threads far exceeds the number of logical processor cores available.
    A little more background:
    The dynamic VIs being called effectively run as daemons, each running a while loop and waiting on a dedicated input queue to receive data and save it to disk. All are stopped via a globally shared stop notifier (passed as a ControlValue.Set method argument at launch). Each is waiting on its respective queue with a 1 second timeout so that the stop notifier can be polled. Under normal operating conditions, each one will run at some rate between 0.1Hz and 25Hz (the various rates are a large driving factor for separating them and needing to spawn them dynamically).
    So, this leads me to the following questions:
    Am I correct that the context switching is the likely culprit in the performance degradation?
    If so, is there a fundamental difference in how LabVIEW handles multithreading with dynamic VI calls versus explicitly drawing separate while loops on a block diagram, or dropping multiple instances of a reentrant VI directly on the block diagram?
    Is it likely that reducing the number of dynamic clones to equal the number of available processor cores would improve performance? (The scope of each clone would grow, as it would have to maintain the state information that was originally distributed across multiple clones.)
    I realize that this question is pretty vague without concrete examples, but I'm hoping someone (AQ? Ben? Any of you NI gurus?) out there could provide some general insight into what's going on under the hood without needing to get too specific.

    TurboPhil wrote:
    Each is waiting on its respective queue with a 1 second timeout so that the stop notifier can be polled.
    There is one relatively easy fix you can probably make here - set the timeout to -1 and destroy the queues to stop the loop (destroying the queue will output an error from the wait primitive). This should at least stop all the code from running all the time, although I'm still not sure how the threading of the different VIs will play with each other. This might be an issue if the queue is only created in the VI, but I'm assuming it isn't.
    Try to take over the world!

  • Server's context switch rate ?

    What is meant by a server's context switch rate?

    I've never heard the term. "Context switching" refers to things like pausing one thread or application to give time to another.
    So I'd guess that "server's context switch rate" refers to how frequently the server in question alternates CPU time from one thread or app to the next.
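For what it's worth, Java has no portable API for reading the OS context-switch counter itself, but the standard java.lang.management API can show how much of a thread's wall-clock time was actually spent on-CPU, which is the flip side of being switched out. A rough sketch (the class name is mine; CPU-time measurement may be unsupported or disabled on some JVMs):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

// Compares a thread's CPU time to wall-clock time: a thread that spends
// most of its wall time off-CPU is being switched out (blocked or preempted).
public class CpuShare {
    public static void main(String[] args) throws InterruptedException {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        long wallStart = System.nanoTime();
        long cpuStart = mx.getCurrentThreadCpuTime();

        long sum = 0;
        for (int i = 0; i < 50_000_000; i++) sum += i; // busy (on-CPU) work
        Thread.sleep(200);                             // off-CPU time

        long cpu = mx.getCurrentThreadCpuTime() - cpuStart;
        long wall = System.nanoTime() - wallStart;
        System.out.printf("on-CPU %.0f%% of wall time (sum=%d)%n",
                          100.0 * cpu / wall, sum);
    }
}
```

Because of the 200 ms sleep, the reported on-CPU percentage comes out well below 100%, mirroring what a heavily switched-out server thread would look like.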

  • Decode Vs Case:  context switching?

    So I was told recently that among other reasons, CASE is "better" than Decode in SQL statements because Decode context switches to PL/SQL to perform the checks.
    I can't find anything in the documentation to support this.
    this site here:
    http://www.dba-oracle.com/oracle_news/2005_11_23_case_decode_machinations.htm
    mentions that one of the disadvantages of decode is that it's post-retrieval, but it also seems to mention that so is CASE.
    anyone have any idea where someone may have got the "context switching" idea from?

    I have often wondered why you would use CASE in PL/SQL when it has IF THEN control structures. Yes, you could, but readability would suffer. What is more important, CASE has a form where the expression is evaluated only once:
    SQL> SET SERVEROUTPUT ON
    SQL> DECLARE
      2      X NUMBER;
      3  BEGIN
      4      PKG1.CNT := 2;
      5      X := CASE PKG1.F1
      6             WHEN 1 THEN 1
      7             WHEN 2 THEN 2
      8             WHEN 3 THEN 3
      9           END;
    10      DBMS_OUTPUT.PUT_LINE('X = ' || X);
    11      DBMS_OUTPUT.PUT_LINE('PKG1.CNT = ' || PKG1.CNT);
    12  END;
    13  /
    Call to PKG1.F1
    X = 3
    PKG1.CNT = 3
    PL/SQL procedure successfully completed.
    SQL> DECLARE
      2      X NUMBER;
      3  BEGIN
      4      PKG1.CNT := 2;
      5      IF PKG1.F1 = 1
      6        THEN X := 1;
      7      ELSIF PKG1.F1 = 2
      8        THEN X := 2;
      9      ELSIF PKG1.F1 = 3
    10        THEN X := 3;
    11      END IF;
    12      DBMS_OUTPUT.PUT_LINE('X = ' || X);
    13      DBMS_OUTPUT.PUT_LINE('PKG1.CNT = ' || PKG1.CNT);
    14  END;
    15  /
    Call to PKG1.F1
    Call to PKG1.F1
    Call to PKG1.F1
    X =
    PKG1.CNT = 5
    PL/SQL procedure successfully completed.
    SQL>
    In such a case you would have to introduce a temp variable:
    SQL> CREATE OR REPLACE
      2  PACKAGE PKG1
      3  IS
      4  CNT NUMBER;
      5  FUNCTION F1 RETURN NUMBER;
      6  END;
      7  /
    Package created.
    SQL> CREATE OR REPLACE
      2  PACKAGE BODY PKG1
      3  IS
      4  FUNCTION F1 RETURN NUMBER
      5  IS
      6  BEGIN
      7  DBMS_OUTPUT.PUT_LINE('Call to PKG1.F1');
      8  CNT := CNT + 1;
      9  RETURN CNT;
    10  END;
    11  END;
    12  /
    Package body created.
    SQL> SET SERVEROUTPUT ON
    SQL> DECLARE
      2      X NUMBER;
      3  BEGIN
      4      PKG1.CNT := 2;
      5      X := CASE PKG1.F1
      6             WHEN 1 THEN 1
      7             WHEN 2 THEN 2
      8             WHEN 3 THEN 3
      9           END;
    10      DBMS_OUTPUT.PUT_LINE('X = ' || X);
    11      DBMS_OUTPUT.PUT_LINE('PKG1.CNT = ' || PKG1.CNT);
    12  END;
    13  /
    Call to PKG1.F1
    X = 3
    PKG1.CNT = 3
    PL/SQL procedure successfully completed.
    SQL> DECLARE
      2      X NUMBER;
      3      TMP NUMBER;
      4  BEGIN
      5      PKG1.CNT := 2;
      6      TMP := PKG1.F1;
      7      IF TMP = 1
      8        THEN X := 1;
      9      ELSIF TMP = 2
    10        THEN X := 2;
    11      ELSIF TMP = 3
    12        THEN X := 3;
    13      END IF;
    14      DBMS_OUTPUT.PUT_LINE('X = ' || X);
    15      DBMS_OUTPUT.PUT_LINE('PKG1.CNT = ' || PKG1.CNT);
    16  END;
    17  /
    Call to PKG1.F1
    X = 3
    PKG1.CNT = 3
    PL/SQL procedure successfully completed.
    SQL>
    SY.

  • Reg : Context-switching for built-in functions -

    Hi Experts,
    Asking this question just out of curiosity to know the internal concepts.
    In a SQL query often we use the in-built Oracle functions like LOWER, UPPER, etc.
    In this case, does context-switch happen?
    Will I be able to look into the code of these functions after logging into SYS schema as SYSDBA?
    FYI - I've Oracle XE 11.2 installed in my home pc (currently in office, so don't have access to it).
    Help much appreciated!
    Thanks,
    Ranit

    ranit B wrote:
    Hi Experts,
    Asking this question just out of curiosity to know the internal concepts.
    In a SQL query often we use the in-built Oracle functions like LOWER, UPPER, etc.
    In this case, does context-switch happen?
    No, because many of these functions are compiled at a low level (C language) into the SQL and PL/SQL engines, so each engine has its own 'copy' (in theory) to execute without having to context switch to the other engine.
    Will I be able to look into the code of these functions after logging into the SYS schema as SYSDBA?
    No, they are written in C and compiled into the engines.
    In terms of the supplied packages (rather than built in functions), many of those are wrapped by oracle so you can only see the public interface, not the actual body code.

  • Overhead of SQL to PL/SQL context switch using an inline function

    Hi,
    We have a bit of sql in a third party application that uses an inline pl/sql function to do some security checks.
    These security checks are redundant in our system - we don't use the functionality so the result is always true, but the function is always called for each line of output, which is over a thousand for a lot of records.
    The function itself is fairly lightweight in our environment - the tables it uses are empty so each iteration of the function is quite quick (about .1 of a second per query in total, vs 12-15 seconds for the 'main' query). What I was wondering if there is any way of measuring the overhead of just doing the function calls.
    If I do a trace of the session I see the timings and cost of the 'main' sql query, and the breakdown of the 2 sql statements that have been called in the function (with over 1000 executions each) but is there any way to measure how much of the time to execute the main query is spent doing the context switch?
    Regards,
    Carl

    You could knock up some example to show the timings and measure it...
    The following shows an example using context switching from PL/SQL to SQL and back in a loop, which gives an idea of the performance difference...
    SQL> ed
    Wrote file afiedt.buf
      1  declare
      2    v_sysdate DATE;
      3  begin
      4    v_sysdate := SYSDATE;
      5    INSERT INTO mytable SELECT rownum FROM DUAL CONNECT BY ROWNUM <= 1000000;
      6    DBMS_OUTPUT.PUT_LINE('Single Transaction: Time Taken: '||ROUND(((SYSDATE-v_sysdate)*(24*60*60)),0));
      7    EXECUTE IMMEDIATE 'TRUNCATE TABLE mytable';
      8    v_sysdate := SYSDATE;
      9    FOR i IN 1..1000000
    10    LOOP
    11      INSERT INTO mytable (x) VALUES (i);
    12    END LOOP;
    13    DBMS_OUTPUT.PUT_LINE('Multi Transaction: Time Taken: '||ROUND(((SYSDATE-v_sysdate)*(24*60*60)),0));
    14    EXECUTE IMMEDIATE 'TRUNCATE TABLE mytable';
    15* end;
    SQL> /
    Single Transaction: Time Taken: 1
    Multi Transaction: Time Taken: 37
    PL/SQL procedure successfully completed.
    SQL>
    Likewise you could time a query with X number of rows calling a PL/SQL function and not calling a PL/SQL function to see the difference. The more rows you do, the better idea you'll get of the difference.
    ;)

  • FORALL context switching .. how it works ?

    hi guys,
    in the asktom link over here
    <u>http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:17483288166654#17514276298239 </u>
    it was said that
    <i>
    forall i in 1 .. z_tab.count
    insert
    does this:
    a) gather up inpus to insert (the entire z_tab)
    b) perform context switch from PLSQL to SQL
    c) execute insert N-times
    d) perform context switch back from SQL to PLSQL
    </i>
    my question is does FORALL statement loops ?
    does it do this ?
    <b>Example 1</b>
    loop 1
    gather data to insert
    loop 2
    gather data to insert
    loop 3
    gather data to insert
    loop finish
    context switch to sql engine
    perform insert 1 by 1
    or
    <b>Example 2</b>
    loop 1 or no loop at all
    gather all data required for insert
    loop finish
    context switch to sql engine
    perform insert 1 by 1
    my guess is example 1 is the correct answer
    Advices gurus ?
    Regards,
    Noob

    At what level are you asking the question?
    In the context of PL/SQL, there is no loop. But if you pull back the layers and look at the intermediate language the procedure is compiled into, how the PL/SQL VM happens to implement those intermediate language instructions, how Oracle's C code happens to implement the VM, etc. it wouldn't shock me if there was some sort of loop-like construct in at least some level in some version of Oracle in some situation. Particularly depending on what you want to count as a loop at that level (somewhere in the SQL engine's C code, for example, Oracle might well have a loop when you're doing a full table scan, though it's probably not particularly useful to talk about a full table scan being in a loop)
    In addition, why are you asking the question? I cannot envision a functionality or performance difference between the two approaches, so it doesn't seem like something that would have any influence on how you use a particular PL/SQL construct.
    Justin

  • Tab context switch and validators problem?

    Hi! We use JDeveloper 11.1.2.4.0.
    Our application is using the Dynamic Tabs UI Shell Template for dynamic tabs. The problem is, when I open a page with validation in one tab (for example: a user edit form) and then try to switch context by clicking another open tab (for example: a group edit form), it opens the right content (the group edit form), but the validation messages for that previous tab pop up (for the user edit form). How can I avoid this behaviour? When I switch context, I don't want the other tab's validators to execute.
    Regards, Marko

    In this case check Decompiling ADF Binaries: How to Skip Validation? and Andrejus Baranovskis's Blog: Skip Validation for ADF Required Tabs
    Timo

  • How to avoid responsibility context switcher?

    Hi all.
    I developed an OAF page called from a JSP. I can navigate from the JSP to the OAF page successfully,
    and I set pageContext.changeResponsibility() in processRequest(). I checked the information in 'About this page' and the responsibility was set on the pageContext.
    But when I open the OAF page for the first time, the following message and a 'Switch Responsibility' poplist appear:
    "The current responsibility context has been switched to: Application Developer".
    I want to avoid this message and the poplist on my page. How can I avoid them?
    Please let me know.

    Please see if the solution in MOS Doc 356814.1 (The Current Responsibility Context Has Been Switched To: Manager Self-Service) can help
    HTH
    Srini

  • E7-00 performance issue: screen context switch cau...

    I've already tried reinstalling the firmware, and am considering a hard reset but want to determine how this happened first.  The E7 is a dev phone, I'm more interested in determining how the issue arose and preventing it than I am in fixing it.
    I received the phone yesterday.  I started it up without a SIM card in place and tried to connect with Ovi Sync to get my contacts onto it.  Ovi Sync was bogging down the mobile and never seemed to complete (it ran for hours) across several reboots, so I removed that account.  This was the only 'glitch' I observed prior to the subject of this post.
    At this point, the device boots reasonably fast, and Ovi Sync has successfully synced my contacts, calendar, notes, etc.  I've set up a single Gmail account as well.  Aside from that it is pretty much pristine.
    The primary symptom is as described in the subject: any event that changes the screen context causes the device to be non-responsive for 5-10 seconds.  This is a very long time.  Press the menu button once, wait 5 seconds, touch 'Applications' and wait 5 seconds more, etc.  I even get a second or two of delay when swiping between homescreens (not just the normal delay, but several seconds where I have time to try pressing 'Call' and can wait long enough to observe that the touch event didn't do anything).
    Once an app is running it seems to run just fine.  
    any thoughts?
    N95-1 ---> N97-NAM ---> N900 ---> E7-00 + N900 (I use them both)
    (N95 was pretty good, N97 had potential but utterly failed to deliver, N900 is absurdly good. Those of you wondering, "should I try N900/Maemo/MeeGo"? The answer is a resounding YES)

    You could try rebooting the phone, and also turn off theme effects; my E7 has none of the delays you seem to be having, and part of the problem may be the attempt to sync with no SIM. If it continues and you haven't put too much data on it, it may be worth trying to restore factory settings, then re-sync your contacts and set up your Google account again.
    If that doesn't work you may need to visit a care centre.
    Also, did you check for updates? There was a minor update a couple of days ago which increased performance and had some bug fixes; it may help.
    Good Luck

  • Dynamically binding VO Parameter from Context Switcher

    Hi guys,
    I am using ADF JSF and BC, and I have a situation here which is:
    1) My JSF page has a panel page component, inside it is a navigable form referencing a View Object.
    2) In the contextSwitcher facet of this PanelPage, I have a SelectInputText which points to a "Globals" VO, having only one row ever (similar to SRDemo).
    3) When the LOV returns a value to this field, I must re-bind the navigable form's VO query to this value, and the form must show the first record of the NEW rowset.
    The solution I am writing uses a ValueChangeListener on the SelectInputText, which runs an Application Module method that gets the value from the Globals VO and re-runs the query using "vo.setWhereClause" and "vo.executeQuery".
    However, even though I debug the application and see the query being executed, the form shows strange behavior: the page re-runs the query but swaps the first row of the new rowset with the row currently displayed. For example, suppose that I am seeing row "X" on the page and select a value from the LOV. The new execution fetches rows "Y" and "Z" from the DB. When I scroll the page, I see that the first record is "X" again, followed by "Z"!
    Do you have any idea of what could be the problem? Maybe a form clearance issue with ADF? Or shouldn't I be using dynamic binding with UPDATEABLE fields on the page, only with READ-ONLY fields?
    Thanks a lot, and regards!
    Thiago

    Thiago,
    you are not showing any of your code and I can only assume that this is a coding problem of yours.
    Frank
