Issue with Merge

Hi all,
I am facing an issue with a MERGE statement. My source rows look like this:
Table a (Source):
Id  Vald_date  Server_serial  System_id
1   11/oct/10  2.5            Real Id
1   13/oct/10  2.5
Table b (Target):
Id  Vald_date  Server_serial  System_id
1   13/oct/10  2.5
I am using the MERGE statement below:
MERGE INTO b
USING ( SELECT DISTINCT id,vald_date,server_serial,system_id
from b
ON ( a.Id = b.ID)
when matched then
update set
b.Vald_date = a.Vald_date,
b.server_serial = a. server_serial ,
b.system_id = a.system_id
when not matched then
INSERT values(a.id,a.Vald_date,a. server_serial ,a.system_id
When I run the statement the first time, the target table is updated with the rows from source table a and SQL%ROWCOUNT returns 2 rows.
When I run the same statement a second time, it throws the error 'ORA-30926: unable to get a stable set of rows in the source tables'.
Please tell me why this is happening; I am not sure how it is reading and processing the rows.
Table scripts:
create table a
(id NUMBER,
vald_date DATE,
server_serial NUMBER,
system_id varchar2(10)
);
Insert into a values(1,TO_DATE('11/OCT/10','DD/MON/YY'),2.5,'Real Id');
Insert into a values(1,TO_DATE('13/OCT/10','DD/MON/YY'),2.5,NULL);
create table b
(id NUMBER,
vald_date DATE,
server_serial NUMBER,
system_id varchar2(10)
);
Insert into b values(1,TO_DATE('13/OCT/10','DD/MON/YY'),2.5,'Real Id');
Oracle Version:
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
PL/SQL Release 10.2.0.4.0 - Production
CORE     10.2.0.4.0     Production
TNS for IBM/AIX RISC System/6000: Version 10.2.0.4.0 - Production
NLSRTL Version 10.2.0.4.0 - Production
Your help would be appreciated !!
Regards,
Vissu......

As @Dombrooks already said, the problem is with the data: table a contains two rows for the same id, so the MERGE can try to update the same row in b more than once, and that is exactly what ORA-30926 complains about. In future, if you could post your test case the way I have shown below, it would be much appreciated.
There are quite a few bugs in your code as well. Until you sort out the row source, I don't think you will be able to proceed any further with the MERGE statement.
SQL> drop table a;
Table dropped.
SQL> drop table b;
Table dropped.
SQL> create table a
  2  (id NUMBER,
  3  vald_date DATE,
  4  server_serial NUMBER,
  5  system_id varchar2(10)
  6  )
  7  /
Table created.
SQL> Insert into a values(1,TO_DATE('11/OCT/10','DD/MON/YY'),2.5,'Real Id')
  2  /
1 row created.
SQL> Insert into a values(1,TO_DATE('13/OCT/10','DD/MON/YY'),2.5,NULL)
  2  /
1 row created.
SQL> create table b
  2  (id NUMBER,
  3  vald_date DATE,
  4  server_serial NUMBER,
  5  system_id varchar2(10)
  6  )
  7  /
Table created.
SQL> Insert into a values(1,TO_DATE('13/OCT/10','DD/MON/YY'),2.5,'Real Id')
  2  /
1 row created.
SQL>
SQL> MERGE INTO b
  2  USING ( SELECT DISTINCT id,vald_date,server_serial,system_id
  3  from b
  4  )
  5  ON ( a.Id = b.ID)
  6  when matched then
  7  update set
  8  b.Vald_date = a.Vald_date,
  9  b.server_serial = a. server_serial ,
10  b.system_id = a.system_id
11  when matched then
12  INSERT values(a.id,a.Vald_date,a. server_serial ,a.system_id
13  );
when matched then
ERROR at line 11:
ORA-00905: missing keyword
SQL>
SQL> ed
Wrote file afiedt.buf
  1  MERGE INTO b
  2  USING ( SELECT DISTINCT id,vald_date,server_serial,system_id
  3  from b
  4  )
  5  ON ( a.Id = b.ID)
  6  when matched then
  7  update set
  8  b.Vald_date = a.Vald_date,
  9  b.server_serial = a. server_serial ,
10  b.system_id = a.system_id
11  when not matched then
12  INSERT values(a.id,a.Vald_date,a. server_serial ,a.system_id
13* )
SQL> /
ON ( a.Id = b.ID)
ERROR at line 5:
ORA-00904: "A"."ID": invalid identifier
SQL> ed
Wrote file afiedt.buf
  1  MERGE INTO b
  2  USING ( SELECT DISTINCT id,vald_date,server_serial,system_id
  3  from a
  4  ) a
  5  ON ( a.Id = b.ID)
  6  when matched then
  7  update set
  8  b.Vald_date = a.Vald_date,
  9  b.server_serial = a. server_serial ,
10  b.system_id = a.system_id
11  when not matched then
12  INSERT values(a.id,a.Vald_date,a. server_serial ,a.system_id
13* )
SQL> /
3 rows merged.
SQL> select * from a;
        ID VALD_DATE SERVER_SERIAL SYSTEM_ID
         1 11-OCT-10           2.5 Real Id
         1 13-OCT-10           2.5
         1 13-OCT-10           2.5 Real Id
SQL> select * from b;
        ID VALD_DATE SERVER_SERIAL SYSTEM_ID
         1 13-OCT-10           2.5 Real Id
         1 13-OCT-10           2.5
         1 11-OCT-10           2.5 Real Id
SQL> MERGE INTO b
  2  USING ( SELECT DISTINCT id,vald_date,server_serial,system_id
  3  from a
  4  ) a
  5  ON ( a.Id = b.ID)
  6  when matched then
  7  update set
  8  b.Vald_date = a.Vald_date,
  9  b.server_serial = a. server_serial ,
10  b.system_id = a.system_id
11  when not matched then
12  INSERT values(a.id,a.Vald_date,a. server_serial ,a.system_id
13  )
14  /
MERGE INTO b
ERROR at line 1:
ORA-30926: unable to get a stable set of rows in the source tables
SQL> select * from v$version;
BANNER
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
PL/SQL Release 10.2.0.4.0 - Production
CORE    10.2.0.4.0      Production
TNS for 64-bit Windows: Version 10.2.0.4.0 - Production
NLSRTL Version 10.2.0.4.0 - Production
Regards
Raj
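The usual fix for ORA-30926 is to make the USING query return at most one row per join key, so that each row in b is updated by only one source row. Below is a minimal sketch of that idea; it assumes the rule you want is "keep the most recent VALD_DATE per ID" - that rule is only an assumption, so adjust the ORDER BY to whatever actually applies:
MERGE INTO b
USING ( SELECT id, vald_date, server_serial, system_id
          FROM ( SELECT a.*,
                        ROW_NUMBER() OVER (PARTITION BY id
                                           ORDER BY vald_date DESC) rn
                   FROM a )
         WHERE rn = 1 ) a
ON ( a.id = b.id )
WHEN MATCHED THEN
  UPDATE SET b.vald_date     = a.vald_date,
             b.server_serial = a.server_serial,
             b.system_id     = a.system_id
WHEN NOT MATCHED THEN
  INSERT VALUES (a.id, a.vald_date, a.server_serial, a.system_id);
With the source reduced to one row per id, each target row is touched at most once and the second run no longer raises ORA-30926.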

Similar Messages

  • Weird issue with merge using JPA...?

    Hi all,
    I have the following code snippet (HsTrans has a many-to-one relation with Hss; Hss is the parent):
                HsTrans hsTrans = new HsTrans();
             Hss hss=em.find(Hss.class, studyId);
             hsTrans.setStudy(studyId);
             hsTrans.setState(state.toString());
             hsTrans.setTimestamp(timeOfState);
             hss.getHsTrans().add(hsTrans); // Add child to parent
             hsTrans.setHss(hss);           // Child -> parent
                em.merge(hss);  // Does not work: throws a unique key violation error
    Whereas replacing the same with this
             Hss hss2=hsStudyTrans.getHsStudy(); 
                em.merge(hss2); // Works fine
       Does anyone know if this is a JPA bug or some other issue.
    Thx
    VR

    RainaV wrote:
    Does anyone know if this is a JPA bug or some other issue?
    JPA is a specification, so it has no bugs (only design flaws). If there is a bug somewhere it is in the persistence provider you are using, but most likely it is just you not understanding how merge and transaction management in general work. I've written dozens of applications now that use JPA and I think I really needed merge a grand total of once.
    In the snippet you provide, you are apparently making the 'hss' object managed through the em.find() method. This means that any changes you make to it will be made persistent as soon as the transaction is committed. You don't need to call merge() at all; you only call that on entities that are detached.
    Now for the actual problem of the unique key violation: I don't even see you persisting hsTrans. Change the code to this:
    HsTrans hsTrans = new HsTrans();
    Hss hss=em.find(Hss.class, studyId);
    hsTrans.setStudy(studyId);
    hsTrans.setState(state.toString());
    hsTrans.setTimestamp(timeOfState);
    hsTrans.setHss(hss); // set managed hss object
    em.persist(hsTrans); // persist hsTrans and make it managed
    hss.getHsTrans().add(hsTrans); // add the new managed hsTrans to the hss mapped collection

  • Issue with Merging two files in BPM

    Hi,
    I need to merge two files (balance and transaction), with the correlation defined on ID, Date and Accountnumber.
    Sometimes, when there are no transaction records, the balance file still comes, with the number "0":
    Balance file:
    MDk;1728;175;02.09.11;781961.09;0.00;0.00;781961.09;;;;;;;;;0
    MDk;8574;175;02.09.11;4462;1112;104098800;104102150;;;;;;;;;2
    The above file contains two accounts:
    MDk;1728;  --- with zero transaction records
    MDk;8574;  --- with two transaction records
    Transaction file:
    MDk;8574;175;02.09.11;;DEBIT;;;;;-1112;;0;02.09.11;;;;20555;;;037;
    MDk;8574;175;02.09.11;;CREDIT;;;;;104098800;;0;02.09.11;;;;;;;099;
    We are using Correlation to merge the files by using the fields (MDk;8574;175;02.09.11)
    Now the issue is that the BPM is not working because the correlation does not match: the balance file contains a row (the one with zero transaction records) which is not present in the transaction file.
    I have to ignore the first record in the balance file, as it contains 0 transaction data, which means there are no records for this account in the transaction file.
    How can I delete those records before the merge condition? Is there anything I can do in the balance file adapter?
    Any suggestions please?
    Thanks
    Deepthi

    Hi Ramesh,
    The problem is at the first step of the BPM, where we receive the files:
    Start --> FORK (Rec1 & Rec2) --> TransformMap (Merge_to_targetfile) --> SendToReceiver --> END
    It is failing at step 1 (Fork), where the files do not match according to the correlation condition we set, i.e. ID, Date, Accountnumber.
    As the transaction file doesn't contain the record "MDk;1728;175;02.09.11" which is present in the balance file, the correlation does not match. Hence it fails; it does not even reach the mapping step.
    As correlation is mandatory to receive the two matching files, it is failing here.

  • Issues with merging layers with layer styles

    Just downloaded CS6 Photoshop off the creative cloud and the merge layer command does not work. What is happening is there is one layer with a layer style then a shape layer and when I hit Cmd E it merges the layers but the layer style will not merge with it. I was chatting it up and everything they recommended did not work. It works on all my other co-workers computers and we are all on the cloud. I tried renaming pref. folder, settings folder, re-installing, holding down shft+option+cmd on start up. I also created a test account but that turned out to be a major inconvenience. Once logged into the account there is nothing there. The suggestions just started getting more ridiculous as chat proceeded. Nothing.
    Has anyone experienced this issue?
    Does anyone have a solution where I don't have to create a whole new admin account just to see if CS6 Photoshop works correctly?
    P.S.
    Adobe customer portal is about worthless.

    Get Rich or Die Tryin wrote:
    Just downloaded CS6 Photoshop off the creative cloud and the merge layer command does not work. What is happening is there is one layer with a layer style then a shape layer and when I hit Cmd E it merges the layers but the layer style will not merge with it. I was chatting it up and everything they recommended did not work. It works on all my other co-workers computers and we are all on the cloud. I tried renaming pref. folder, settings folder, re-installing, holding down shft+option+cmd on start up. I also created a test account but that turned out to be a major inconvenience. Once logged into the account there is nothing there. The suggestions just started getting more ridiculous as chat proceeded. Nothing.
    There are two merge-layers type commands: one is Merge Down, which involves only two layers, and the other is Merge Visible. In my opinion there is a problem with the way Merge Down is implemented. When you use Merge Down, if the bottom layer in the merge has a layer style, the merged layer also has this layer style attached. This means that pixels from the layer merged into it will be affected by this layer style, and the result may well not look like the composite looked before the merge. This also happens in CS5. Merge Visible works the correct way in my view: as the merge progresses, the top two layers are merged by applying any layer masks and layer styles first, so the resulting layer has no layer masks and no layer style. This result is then merged with the next lower layer, with its masks and style applied first. The result is a single layer without mask and style that looks like the original composite view. If you're having the problem with Cmd/Ctrl+E and the result has a layer style, try turning the layer style visibility off and see if that helps. If it helps somewhat, back up to before the merge down and use Merge Visible, or first rasterize the bottom layer to be merged.

  • Memory issue with 'Merge to HDR' in CS4

    Hi All
    I use Lightroom 2.3 and have recently tried to use the 'Edit in' function to 'Merge to HDR in Photoshop' with just 3 Raw files.
    PS opens fine and the three images appear as separate layers, the 'Merge to HDR' window then opens allowing me to set the White Point preview etc. but on trying to complete the operation I get the following error message..
    "The operation could not be completed.
    Not enough storage is available to complete this operation"
    I am using;
    -Win Vista 32bit with 3gb of Ram.
    -Intel 2 Quad CPU Q6600 @ 2.4GHz
    -NVIDIA GeForce9600GS (7.15.11.7490 Driver)
    Under 'Preferences'/'Performance' I have 'Memory Usage' set to 1298MB (79%) and the Scratch disk is set to the internal HD with 502.86GB reading as free. (I also have 2 external USB 2.0 HDs with 880 & 909GB free respectively. I have tried every permutation of having these combined or individually as the Scratch disk, but to no avail.)
    The really annoying/bizarre aspect of this is that if I open Windows Task Manager and watch the Photoshop memory usage, it peaks and stays at 1,247,704K for about 1 to 1.5 minutes, during which I continue to get the above error message if I try to complete the Merge process.
    However, if I wait and monitor this memory usage, it starts to drop after said period to 1,003,612K, after which the Merge process completes with no problem??? (Memory usage by PS then drops to 770,384K.)
    Whilst I appreciate that my PC is no Ferrari, having to wait over a minute seems more than a little odd. Have I got something set wrong?
    All advice gratefully received......
    Many thanks!

    Hi Chris/Zeno
    Many thanks for the responses...
    After more experimentation, I think I have solved this. If I set all three HDs as Scratch disks (which totals 2285GB!!!) and switch 'Enable OpenGL Drawing' OFF then I can merge as many RAW files as I like with no time delay..
    Even with OpenGL Drawing switched to off, if I only have the internal HD set to be the Scratch disk, then I get the above error message which I would not have expected to see with 500+GB of free space???
    Anyway, it works now...
    Cheers
    JJN

  • Droid X issues with merging touchdown calendar with master phone calendar

    Hello,
    I have the Motorola Droid X.
    I access my work e-mail with touchdown exchange (purchased), which is working fine.
    I'm trying to determine how to get my droid calendar to sync with my touchdown calendar.
    I have tried to set this connection up manually to no avail.
    Any help is greatly appreciated.
    Thanks

    roushc wrote:
    Hello,
    I have the Motorola Droid X.
    I access my work e-mail with touchdown exchange (purchased), which is working fine.
    I'm trying to determine how to get my droid calendar to sync with my touchdown calendar.
    I have tried to set this connection up manually to no avail.
    Any help is greatly appreciated.
    Thanks
    Your best bet is to contact the App author for support of their program.

  • Performance problem with MERGE statement

    Version : 11.1.0.7.0
    I have an insert statement like the following, which takes less than 2 seconds to complete and inserts around 4000 rows:
    INSERT INTO sch.tab1
              (c1,c2,c3)
    SELECT c1,c2,c3
       FROM sch1.tab1@dblink
      WHERE c1 IN (SELECT c1 FROM sch1.tab2@dblink);
    I wanted to change it to a MERGE statement just to avoid duplicate data. I changed it to the following:
    MERGE INTO sch.tab1 t1
    USING (SELECT c1,c2,c3
       FROM sch1.tab1@dblink
      WHERE c1 IN (SELECT c1 FROM sch1.tab2@dblink)) t2
    ON (t1.c1 = t2.c1)
    WHEN NOT MATCHED THEN
    INSERT (t1.c1,t1.c2,t1.c3)
    VALUES (t2.c1,t2.c2,t2.c3);
    The MERGE statement takes more than 2 minutes (I stopped the execution after that). I removed the WHERE clause subquery inside the subquery of the USING section and it executed in 1 second.
    If I execute the same select statement with the WHERE clause outside the MERGE statement, it takes just 1 sec to return the data.
    Is there any known issue with the MERGE statement in the above scenario?

    riedelme wrote:
    Are your join columns indexed?
    Yes, the join columns are indexed.
    You are doing a remote query inside the merge; remote queries can slow things down. Do you have to select all the rows from the remote table? What if you copied them locally using a materialized view?
    Yes, I agree that remote queries will slow things down. But the same is not happening with SELECT, INSERT and PL/SQL; it happens only when we are using MERGE. I have to test what happens if we use a subquery referring to a local table or materialized view. Even if it works, I think there is still a problem with MERGE in the case of remote subqueries (at least until I test local queries). I wish someone could test similar scenarios so that we know whether it is a genuine problem or something specific on my side.
    >
    BTW, I haven't had great luck with MERGE either :(. Last time I tried to use it I found it faster to use a loop with insert/update logic.
    :) I used the same to overcome this situation. I think MERGE still needs to be improved functionally on Oracle's side. I personally feel that it is one of the more robust features to grace SQL and PL/SQL.
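    A minimal sketch of the "copy them locally" idea mentioned above, just to illustrate the shape of it - the staging table name (tmp_tab1) and the column datatypes are assumptions, so match them to sch.tab1. The point is to pull the remote rows across the database link once, then run the MERGE purely against local data:
    -- one-time setup: a local staging table (datatypes are placeholders)
    CREATE GLOBAL TEMPORARY TABLE tmp_tab1
    ( c1 NUMBER, c2 VARCHAR2(100), c3 VARCHAR2(100) )
    ON COMMIT PRESERVE ROWS;
    -- each run: copy the remote rows locally, then merge without touching the dblink again
    INSERT INTO tmp_tab1 (c1, c2, c3)
    SELECT c1, c2, c3
      FROM sch1.tab1@dblink
     WHERE c1 IN (SELECT c1 FROM sch1.tab2@dblink);
    MERGE INTO sch.tab1 t1
    USING tmp_tab1 t2
    ON (t1.c1 = t2.c1)
    WHEN NOT MATCHED THEN
      INSERT (t1.c1, t1.c2, t1.c3)
      VALUES (t2.c1, t2.c2, t2.c3);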

  • Issue with "read by other session" and a parallel MERGE query

    Hi everyone,
    we have run into an issue with a batch process updating a large table (12 million rows / a few GB, so it's not that large). The process is quite simple - load the 'increment' from a file into a working table (INCREMENT_TABLE) and apply it to the main table using a MERGE. The increment is rather small (usually less than 10k rows), but the MERGE runs for hours (literally) although the execution plan seems quite reasonable (can post it tomorrow, if needed).
    The first thing we've checked is AWR report, and we've noticed this:
    Top 5 Timed Foreground Events
    Event                    Waits      Time(s)  Avg wait (ms)  % DB time  Wait Class
    DB CPU                              10,086                  43.82
    read by other session    3,968,673  9,179    2              39.88      User I/O
    db file scattered read   1,058,889  2,307    2              10.02      User I/O
    db file sequential read  408,499    600      1              2.61       User I/O
    direct path read         132,430    459      3              1.99       User I/O
    So obviously most of the time was consumed by the "read by other session" wait event. There were no other queries running at the server, so in this case "other session" actually means "parallel processes" used to execute the same query. The main table (the one that's updated by the batch process) has "PARALLEL DEGREE 4" so Oracle spawns 4 processes.
    I'm not sure how to fix this. I've read a lot of details about "read by other session" but I'm not sure it's the root cause - in the end, when two processes read the same block, it's quite natural that only one does the physical I/O while the other waits. What really seems suspicious is the number of waits - 4 million waits means 4 million blocks, 8kB each. That's about 32GB - the table has about 4GB, and there are less than 10k rows updated. So 32 GB is a bit overkill (OK, there are indexes etc. but still, that's 8x the size of the table).
    So I'm thinking that the buffer cache is too small - one process reads the data into cache, then it's removed and read again. And again ...
    One of the recommendations I've read was to increase the PCTFREE, to eliminate 'hot blocks' - but wouldn't that make the problem even worse (more blocks to read and keep in the cache)? Or am I completely wrong?
    The database is 11gR2, the buffer cache is about 4GB. The storage is a SAN (but I don't think this is the bottleneck - according to the iostat results it performs much better in case of other batch jobs).

    OK, so a bit more detail - we've managed to significantly decrease the estimated cost and runtime. All we had to do was a small change in the SQL. Instead of
    MERGE /*+ parallel(D DEFAULT)*/ INTO T_NOTUNIFIED_CLIENT D /*+ append */
      USING (SELECT ...
          FROM TMP_SODW_BB) S
      ON (D.NCLIENT_KEY = S.NCLIENT_KEY AND D.CURRENT_RECORD = 'Y' AND S.DIFF_FLAG IN ('U', 'D'))
      ...
    (which is the query listed above) we have done this:
    MERGE /*+ parallel(D DEFAULT)*/ INTO T_NOTUNIFIED_CLIENT D /*+ append */
      USING (SELECT ...
          FROM TMP_SODW_BB
         WHERE DIFF_FLAG IN ('U', 'D')) S
      ON (D.NCLIENT_KEY = S.NCLIENT_KEY AND D.CURRENT_RECORD = 'Y')
      ...
    i.e. we have moved the condition from the MERGE ON clause to the SELECT. And suddenly, the execution plan is this:
    OPERATION                           OBJECT_NAME             OPTIONS             COST
    MERGE STATEMENT                                                                 239
      MERGE                             T_NOTUNIFIED_CLIENT
        PX COORDINATOR
          PX SEND                       :TQ10000                QC (RANDOM)         239
            VIEW
              NESTED LOOPS                                      OUTER               239
                PX BLOCK                                        ITERATOR
                  TABLE ACCESS          TMP_SODW_BB             FULL                2
                    Filter Predicates
                      OR
                        DIFF_FLAG='D'
                        DIFF_FLAG='U'
                  TABLE ACCESS          T_NOTUNIFIED_CLIENT       BY INDEX ROWID    3
                    INDEX               AK_UQ_NOTUNIF_T_NOTUNI    RANGE SCAN        2
                      Access Predicates
                        AND
                          D.NCLIENT_KEY(+)=NCLIENT_KEY
                          D.CURRENT_RECORD(+)='Y'
                      Filter Predicates
                        D.CURRENT_RECORD(+)='Y'
    Yes, I know the queries are not exactly the same - but we can fix that. The point is that the TMP_SODW_BB table contains 1639 rows in total, and 284 of them match the moved 'IN' condition. Even if we remove the condition altogether (i.e. 1639 rows have to be merged), the execution plan does not change (the cost increases to about 1300, which is proportional to the number of rows).
    But with the original IN condition (that turns into an OR combination of predicates) in the MERGE ON clause, the cost suddenly skyrockets to 990,000 and it's damn slow. It seems like a problem with cost estimation, because once we remove one of the values (so there's only one value in the IN clause), it works fine again. So I guess it's a planner/estimator issue ...
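    For reference, here is a self-contained sketch of the rewrite described above (NCLIENT_KEY, CURRENT_RECORD and DIFF_FLAG come from the thread; the CLIENT_NAME column is hypothetical, added only to make the example complete): keep the join and the target-side filter in the ON clause, and push the source-side filter into the USING subquery.
    -- slow variant: source filter buried in the ON clause
    MERGE INTO t_notunified_client d
    USING (SELECT nclient_key, client_name, diff_flag
             FROM tmp_sodw_bb) s
    ON (d.nclient_key = s.nclient_key
        AND d.current_record = 'Y'
        AND s.diff_flag IN ('U', 'D'))
    WHEN MATCHED THEN
      UPDATE SET d.client_name = s.client_name;
    -- faster variant: filter the source rows inside USING, keep ON as the pure join
    MERGE INTO t_notunified_client d
    USING (SELECT nclient_key, client_name
             FROM tmp_sodw_bb
            WHERE diff_flag IN ('U', 'D')) s
    ON (d.nclient_key = s.nclient_key AND d.current_record = 'Y')
    WHEN MATCHED THEN
      UPDATE SET d.client_name = s.client_name;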

  • Issue with BPM-Merging 2 files.

    Hi all,
    I am facing one issue with my development object. It is a BPM scenario in which I am merging 2 files using a constant correlation id (a JDBC to File scenario), and I am using 2 JDBC adapters on the source side. The scenario executes without any error and I get the output, but the output contains only the data of one file: the merging is not happening and only one file's data is displayed in the output. Could someone help me with this issue? Am I missing something here? Any help will be really appreciated.
    Thanks,
    Lekshmi.

    Hi all,
    As mentioned in my earlier post, the same scenario was working with File adapters on the source side. I figured out why it was working earlier.
    Since I generated the source files for the File adapters myself, I added the namespace prefix exactly as displayed in the mapping (the ns1 prefix shown below).
    <ns0:Messages xmlns:ns0="http://sap.com/xi/XI/SplitAndMerge">
       <ns0:Message1>
          <ns1:Test1_MT xmlns:ns1="http://testing.com/Details">
             <row>
             </row>
          </ns1:Test1_MT>
       </ns0:Message1>
       <ns0:Message2>
          <ns1:Test2_MT xmlns:ns1="http://testing.com/Details">
             <row>
                <VIA_NO/>
                <VSL_NM/>
                <ATA_DTTM/>
                <PATA_DTTM/>
                <ADT_INS_DTTM/>
                <ADT_UPD_DTTM/>
             </row>
          </ns1:Test2_MT>
       </ns0:Message2>
    </ns0:Messages>
    But in the real-time scenario, where data is pulled from the database and the input message is created through the JDBC adapter, I am getting the source message for mapping as:
    <ns0:Messages xmlns:ns0="http://sap.com/xi/XI/SplitAndMerge">
       <ns0:Message1>
          <ns:Test1_MT xmlns:ns="http://testing.com/Details">
             <row>
             </row>
         </ns:Test1_MT>
       </ns0:Message1>
       <ns0:Message2>
         <ns:Test2_MT xmlns:ns="http://testing.com/Details">
             <row>
                <VIA_NO/>
                <VSL_NM/>
                <ATA_DTTM/>
                <PATA_DTTM/>
                <ADT_INS_DTTM/>
                <ADT_UPD_DTTM/>
             </row>
         </ns:Test2_MT>
       </ns0:Message2>
    </ns0:Messages>
    When I tested this message in message mapping, it gives only the first file in the output.
    Any idea how to resolve this one?
    Rgds,
    Lekshmi.

  • Unattend.xml parsing/merging issue with SCVMM 2012 R2

    Hi all
    I have a problem with a new install of SCVMM 2012 R2. I have created templates for 2008 R2, 2012 and 2012 R2. I have an issue with the unattend.xml losing some of its configuration when the 2012 and 2012 R2 templates are built, this issue however doesn't
    occur on a 2008 R2 template build.
    I have generated unattend.xml files for all 3 OS's using System Image Manager in the latest AIK using the original install.wim from each OS media.
    In the unattend.xml file I have specified language settings, and a few other bits but the issue I have is when I configure autologon with local admin and password. I specify a logon count of 1, I also specify a GUIRunOnce command in SCVMM and not the answerfile.
    The problem is the resulting merged unattend.xml has a logon count of 999 and no GUIRunOnce command. I have tried different variations where I specify GUIRunOnce in the xml and not SCVMM, applying the autologon to the Template or Guest OS profile and
    all end with the resulting xml with logon count of 999 and no GUIRunOnce. If I remove the autologon part then GUIRunOnce gets parsed and works correctly.
    As mentioned this only happens with Server 2012 and 2012 R2, 2008 R2 works correctly.
    Any ideas?

    Hi Kevin
    I am struggling with the same thing during bare metal installs of Hyper-V hosts. In my case I have managed to narrow it down to the language settings in the oobe pass. If I include them, parsing of the unattend file halts (without an explicit error anywhere)
    and the host fails to join the domain. Could you try to leave out the language settings in oobe pass and see if it then completes as expected? Would be interesting to see if it actually is the same issue with a slightly different flavour :)
    EDIT: Got a bit further now with the new release of WS2012 R2. All the testing I've done has been with the 05182 build. Stumbled across KB2913316 which stated that a new build (31419) was released December 11th. Although the KB does not directly apply to our issue, I thought I'd give it a go, so I went about building a new vhdx image. Lo and behold - the first test went smoothly, applying all the settings in the unattend file! I'm going to continue testing to make sure the successful run wasn't just a fluke.

  • Issue with Full-Text (FTS) master merge on SQL Server 2012 SP2

    Hi,
    On my current project we have a really annoying issue with the master merge process that occurs after a full population of the FTS index.
    Let me describe our process: we have a continuous build that sets up the environment and runs unit tests after each commit to our source control. For each run we create a new DB (on the same SQL Server), then fill it with test data and run the unit tests. Sometimes the unit tests fail because the FTS index population cannot finish in time.
    We constantly see in the [sysprocesses] table lots of sessions with wait type FT_MASTER_MERGE that block our tests.
    In the FTS log we have the following error:
    The master merge started at the end of the full crawl of table or indexed view [TABLENAME] failed with HRESULT = '0x80000049'. Database ID is '45', table id is
    706101556, catalog ID: 5.
    Here is an example of how we create FTS catalog and add table to it (As you can see it's created with auto change tracking together with
    index update in background):
    -- Create FTS catalog
    EXEC sp_fulltext_catalog 'WilcoFTSCatalog', 'create'
    EXEC sp_fulltext_table 'Users', 'create', 'WilcoFTSCatalog', 'PK_Users'
    EXEC sp_fulltext_column 'Users', 'UserId', 'add'
    EXEC sp_fulltext_column 'Users', 'Name', 'add'
    EXEC sp_fulltext_table 'Users', 'activate'
    EXEC sp_fulltext_table 'Users', 'start_change_tracking'
    EXEC sp_fulltext_table 'Users', 'start_background_updateindex'
    Does anybody know what is the root cause of this issue and what should be done to avoid it?
    Thank you in advance,
    Olena Smoliak


  • Performance issues with version-enabled partitioned tables?

    Hi all,
    Are there any known performance issues with version-enabled partitioned tables?
    I've been doing some performance tests with a large version-enabled partitioned table and it seems that the CBO (cost-based optimizer) is choosing very expensive plans during merge operations.
    Thanks in advance,
    Vitor
    Example:
         Object Name     Rows     Bytes     Cost     Object Node     In/Out     PStart     PStop
    UPDATE STATEMENT Optimizer Mode=CHOOSE          1          249                    
    UPDATE     SIG.SIG_QUA_IMG_LT                                   
    NESTED LOOPS SEMI          1     266     249                    
    PARTITION RANGE ALL                                   1     9
    TABLE ACCESS FULL     SIG.SIG_QUA_IMG_LT     1     259     2               1     9
    VIEW     SYS.VW_NSO_1     1     7     247                    
    NESTED LOOPS          1     739     247                    
    NESTED LOOPS          1     677     247                    
    NESTED LOOPS          1     412     246                    
    NESTED LOOPS          1     114     244                    
    INDEX RANGE SCAN     WMSYS.MODIFIED_TABLES_PK     1     62     2                    
    INDEX RANGE SCAN     SIG.QIM_PK     1     52     243                    
    TABLE ACCESS BY GLOBAL INDEX ROWID     SIG.SIG_QUA_IMG_LT     1     298     2               ROWID     ROW L
    INDEX RANGE SCAN     SIG.SIG_QUA_IMG_PKI$     1          1                    
    INDEX RANGE SCAN     WMSYS.WM$NEXTVER_TABLE_NV_INDX     1     265     1                    
    INDEX UNIQUE SCAN     WMSYS.MODIFIED_TABLES_PK     1     62                         
    /* Formatted on 2004/04/19 18:57 (Formatter Plus v4.8.0) */                                        
    UPDATE /*+ USE_NL(Z1) ROWID(Z1) */sig.sig_qua_img_lt z1                                        
    SET z1.nextver =                                        
    SYS.ltutil.subsversion                                        
    (z1.nextver,                                        
    SYS.ltutil.getcontainedverinrange (z1.nextver,                                        
    'SIG.SIG_QUA_IMG',                                        
    'NpCyPCX3dkOAHSuBMjGioQ==',                                        
    4574,                                        
    4575                                        
    4574                                        
    WHERE z1.ROWID IN (
    (SELECT /*+ ORDERED USE_NL(T1) USE_NL(T2) USE_NL(J2) USE_NL(J3)
    INDEX(T1 QIM_PK) INDEX(T2 SIG_QUA_IMG_PKI$)
    INDEX(J2 WM$NEXTVER_TABLE_NV_INDX) INDEX(J3 MODIFIED_TABLES_PK) */
    t2.ROWID
    FROM (SELECT /*+ INDEX(WM$MODIFIED_TABLES MODIFIED_TABLES_PK) */
    UNIQUE VERSION
    FROM wmsys.wm$modified_tables
    WHERE table_name = 'SIG.SIG_QUA_IMG'
    AND workspace = 'NpCyPCX3dkOAHSuBMjGioQ=='
    AND VERSION > 4574
    AND VERSION <= 4575) j1,
    sig.sig_qua_img_lt t1,
    sig.sig_qua_img_lt t2,
    wmsys.wm$nextver_table j2,
    (SELECT /*+ INDEX(WM$MODIFIED_TABLES MODIFIED_TABLES_PK) */
    UNIQUE VERSION
    FROM wmsys.wm$modified_tables
    WHERE table_name = 'SIG.SIG_QUA_IMG'
    AND workspace = 'NpCyPCX3dkOAHSuBMjGioQ=='
    AND VERSION > 4574
    AND VERSION <= 4575) j3
    WHERE t1.VERSION = j1.VERSION
    AND t1.ima_id = t2.ima_id
    AND t1.qim_inf_esq_x_tile = t2.qim_inf_esq_x_tile
    AND t1.qim_inf_esq_y_tile = t2.qim_inf_esq_y_tile
    AND t2.nextver != '-1'
    AND t2.nextver = j2.next_vers
    AND j2.VERSION = j3.VERSION))

    Hello Vitor,
    There are currently no known issues with version enabled tables that are partitioned. The merge operation may need to access all of the partitions of a table depending on the data that needs to be moved/copied from the child to the parent. This is the reason for the 'Partition Range All' step in the plan that you provided. The majority of the remaining steps are due to the hints that have been added, since this plan has provided the best performance for us in the past for this particular statement. If this is not the case for you, and you feel that another plan would yield better performance, then please let me know and I will take a look at it.
    One suggestion would be to make sure that the table has been recently analyzed, so that the optimizer has the most current data about the table.
    Performance issues are very hard to fix without a reproducible test case, so it may be advisable to file a TAR if you continue to have significant performance issues with the mergeWorkspace operation.
    Thank You,
    Ben
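    As a minimal sketch of the "recently analyzed" suggestion (the owner and table name are taken from the plan above; adjust them, and the sampling options, to your own schema):
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname          => 'SIG',
        tabname          => 'SIG_QUA_IMG_LT',
        cascade          => TRUE,                         -- also gather index statistics
        estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE);
    END;
    /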

  • Issue with setting float point in Textfield

    hi
    I have an issue with float input in a textfield.
    What I want to do is this:
    when the user starts typing numerics, it should accept input from the right-hand side and keep appending values from the right-hand side.
    for ex
    if i want to enter 123.45
    user starts entering
    1 then it should display as 0.01
    2 then it should display as 0.12
    3 then it should display as 1.23
    4 then it should display as 12.34
    5 then it should display as 123.45
    To achieve this I have written the code below:
    public class Test{
         public static void main(String[] a){
         try {
    UIManager.setLookAndFeel("com.sun.java.swing.plaf.windows.WindowsLookAndFeel");
    } catch (Exception evt) {}
    DecimalFormat format = new DecimalFormat();
    format.setGroupingUsed(true);
    format.setGroupingSize(3);
    format.setParseIntegerOnly(false);
    JFrame f = new JFrame("Numeric Text Field Example");
    final DecimalFormateGen tf = new DecimalFormateGen(10, format);
    // tf.setValue((double) 123456.789);
    tf.setHorizontalAlignment(SwingConstants.RIGHT);
    JLabel lbl = new JLabel("Type a number: ");
    f.getContentPane().add(tf, "East");
    f.getContentPane().add(lbl, "West");
    tf.addKeyListener(new KeyAdapter(){
         public void keyReleased(KeyEvent ke){
              char ch = ke.getKeyChar();
              char key;
              int finalres =0;
              String str,str1 = null,str2 =null,str3 = null,str4= null;
              if(ke.getKeyChar() == KeyEvent.VK_0 || ke.getKeyChar() == KeyEvent.VK_0 ||
                        ke.getKeyChar() == KeyEvent.VK_0 || ke.getKeyChar() == KeyEvent.VK_1 || ke.getKeyChar() == KeyEvent.VK_2 ||
                             ke.getKeyChar() == KeyEvent.VK_3 || ke.getKeyChar() == KeyEvent.VK_4 || ke.getKeyChar() == KeyEvent.VK_5 ||
                             ke.getKeyChar() == KeyEvent.VK_6 || ke.getKeyChar() == KeyEvent.VK_7 || ke.getKeyChar() == KeyEvent.VK_8 ||
                             ke.getKeyChar() == KeyEvent.VK_9 ){
                   double value1 = Double.parseDouble(tf.getText());
                   int position = tf.getCaretPosition();
                   if(tf.getText().length() == 1){
                        if(tf.getText() != null || tf.getText() != ""){
                        value1 = value1 / 100;
                        tf.setText(String.valueOf(value1));
                   /*else if(tf.getText().length() == 3){
                        str = tf.getText();
                        for(int i=0;i<str.length();i++){
                             if(str.charAt(i) == '.'){
                                  str1 = str.substring(0,i);
                                  str2 = str.substring(i+1,str.length()-1);
                                  break;
                        key = ke.getKeyChar();
                        finalres = calculate.calculate1(str2,key);
                        str3 = merge.merge1(str1,finalres);
                        tf.setText(str3);
                        System.out.println(key);
                        System.out.println(str1);
                        System.out.println(str2);
                   else{
                        value1 = Float.parseFloat(tf.getText());
                        value1 = value1*10;
                        tf.setText(String.valueOf(value1));
    tf.addActionListener(new ActionListener() {
    public void actionPerformed(ActionEvent evt) {
    try {
    tf.normalize();
    Long l = tf.getLongValue();
    System.out.println("Value is (Long)" + l);
    } catch (ParseException e1) {
    try {
    Double d = tf.getDoubleValue();
    System.out.println("Value is (Double)" + d);
    } catch (ParseException e2) {
    System.out.println(e2);
    f.pack();
    f.setVisible(true);
    import javax.swing.JTextField;
    * Created on May 25, 2005
    * TODO To change the template for this generated file go to
    * Window - Preferences - Java - Code Style - Code Templates
    * @author jagjeevanreddyg
    * TODO To change the template for this generated type comment go to
    * Window - Preferences - Java - Code Style - Code Templates
    import java.awt.Toolkit;
    import java.awt.event.ActionEvent;
    import java.awt.event.ActionListener;
    import java.text.DecimalFormat;
    import java.text.ParseException;
    import java.text.ParsePosition;
    import javax.swing.JFrame;
    import javax.swing.JLabel;
    import javax.swing.JTextField;
    import javax.swing.UIManager;
    import javax.swing.text.AbstractDocument;
    import javax.swing.text.AttributeSet;
    import javax.swing.text.BadLocationException;
    import javax.swing.text.Document;
    import javax.swing.text.PlainDocument;
    import javax.swing.text.AbstractDocument.Content;
    public class DecimalFormateGen extends JTextField implements
    NumericPlainDocument.InsertErrorListener {
         public DecimalFormateGen() {
         this(null, 0, null);
         public DecimalFormateGen(String text, int columns, DecimalFormat format) {
         super(null, text, columns);
         NumericPlainDocument numericDoc = (NumericPlainDocument) getDocument();
         if (format != null) {
         numericDoc.setFormat(format);
         numericDoc.addInsertErrorListener(this);
         public DecimalFormateGen(int columns, DecimalFormat format) {
         this(null, columns, format);
         public DecimalFormateGen(String text) {
         this(text, 0, null);
         public DecimalFormateGen(String text, int columns) {
         this(text, columns, null);
         public void setFormat(DecimalFormat format) {
         ((NumericPlainDocument) getDocument()).setFormat(format);
         public DecimalFormat getFormat() {
         return ((NumericPlainDocument) getDocument()).getFormat();
         public void formatChanged() {
         // Notify change of format attributes.
         setFormat(getFormat());
         // Methods to get the field value
         public Long getLongValue() throws ParseException {
         return ((NumericPlainDocument) getDocument()).getLongValue();
         public Double getDoubleValue() throws ParseException {
         return ((NumericPlainDocument) getDocument()).getDoubleValue();
         public Number getNumberValue() throws ParseException {
         return ((NumericPlainDocument) getDocument()).getNumberValue();
         // Methods to install numeric values
         public void setValue(Number number) {
         setText(getFormat().format(number));
         public void setValue(long l) {
         setText(getFormat().format(l));
         public void setValue(double d) {
         setText(getFormat().format(d));
         public void normalize() throws ParseException {
         // format the value according to the format string
         setText(getFormat().format(getNumberValue()));
         // Override to handle insertion error
         public void insertFailed(NumericPlainDocument doc, int offset, String str,
         AttributeSet a) {
         // By default, just beep
         Toolkit.getDefaultToolkit().beep();
         // Method to create default model
         protected Document createDefaultModel() {
         return new NumericPlainDocument();
         // Test code
         public static void main(String[] args) {
         try {
         UIManager.setLookAndFeel("com.sun.java.swing.plaf.windows.WindowsLookAndFeel");
         } catch (Exception evt) {}
         DecimalFormat format = new DecimalFormat("#,###.###");
         format.setGroupingUsed(true);
         format.setGroupingSize(3);
         format.setParseIntegerOnly(false);
         JFrame f = new JFrame("Numeric Text Field Example");
         final DecimalFormateGen tf = new DecimalFormateGen(10, format);
         tf.setValue((double) 123456.789);
         JLabel lbl = new JLabel("Type a number: ");
         f.getContentPane().add(tf, "East");
         f.getContentPane().add(lbl, "West");
         tf.addActionListener(new ActionListener() {
         public void actionPerformed(ActionEvent evt) {
         try {
         tf.normalize();
         Long l = tf.getLongValue();
         System.out.println("Value is (Long)" + l);
         } catch (ParseException e1) {
         try {
         Double d = tf.getDoubleValue();
         System.out.println("Value is (Double)" + d);
         } catch (ParseException e2) {
         System.out.println(e2);
         f.pack();
         f.setVisible(true);
         class NumericPlainDocument extends PlainDocument {
         public NumericPlainDocument() {
         setFormat(null);
         public NumericPlainDocument(DecimalFormat format) {
         setFormat(format);
         public NumericPlainDocument(AbstractDocument.Content content,
         DecimalFormat format) {
         super(content);
         setFormat(format);
         try {
         format
         .parseObject(content.getString(0, content.length()), parsePos);
         } catch (Exception e) {
         throw new IllegalArgumentException(
         "Initial content not a valid number");
         if (parsePos.getIndex() != content.length() - 1) {
         throw new IllegalArgumentException(
         "Initial content not a valid number");
         public void setFormat(DecimalFormat fmt) {
         this.format = fmt != null ? fmt : (DecimalFormat) defaultFormat.clone();
         decimalSeparator = format.getDecimalFormatSymbols()
         .getDecimalSeparator();
         groupingSeparator = format.getDecimalFormatSymbols()
         .getGroupingSeparator();
         positivePrefix = format.getPositivePrefix();
         positivePrefixLen = positivePrefix.length();
         negativePrefix = format.getNegativePrefix();
         negativePrefixLen = negativePrefix.length();
         positiveSuffix = format.getPositiveSuffix();
         positiveSuffixLen = positiveSuffix.length();
         negativeSuffix = format.getNegativeSuffix();
         negativeSuffixLen = negativeSuffix.length();
         public DecimalFormat getFormat() {
         return format;
         public Number getNumberValue() throws ParseException {
         try {
         String content = getText(0, getLength());
         parsePos.setIndex(0);
         Number result = format.parse(content, parsePos);
         if (parsePos.getIndex() != getLength()) {
         throw new ParseException("Not a valid number: " + content, 0);
         return result;
         } catch (BadLocationException e) {
         throw new ParseException("Not a valid number", 0);
         public Long getLongValue() throws ParseException {
         Number result = getNumberValue();
         if ((result instanceof Long) == false) {
         throw new ParseException("Not a valid long", 0);
         return (Long) result;
         public Double getDoubleValue() throws ParseException {
         Number result = getNumberValue();
         if ((result instanceof Long) == false
         && (result instanceof Double) == false) {
         throw new ParseException("Not a valid double", 0);
         if (result instanceof Long) {
         result = new Double(result.doubleValue());
         return (Double) result;
         public void insertString(int offset, String str, AttributeSet a)
         throws BadLocationException {
         if (str == null || str.length() == 0) {
         return;
         Content content = getContent();
         int length = content.length();
         int originalLength = length;
         parsePos.setIndex(0);
         // Create the result of inserting the new data,
         // but ignore the trailing newline
         String targetString = content.getString(0, offset) + str
         + content.getString(offset, length - offset - 1);
         // Parse the input string and check for errors
         do {
         boolean gotPositive = targetString.startsWith(positivePrefix);
         boolean gotNegative = targetString.startsWith(negativePrefix);
         length = targetString.length();
         // If we have a valid prefix, the parse fails if the
         // suffix is not present and the error is reported
         // at index 0. So, we need to add the appropriate
         // suffix if it is not present at this point.
         if (gotPositive == true || gotNegative == true) {
         String suffix;
         int suffixLength;
         int prefixLength;
         if (gotPositive == true && gotNegative == true) {
         // This happens if one is the leading part of
         // the other - e.g. if one is "(" and the other "(("
         if (positivePrefixLen > negativePrefixLen) {
         gotNegative = false;
         } else {
         gotPositive = false;
         if (gotPositive == true) {
         suffix = positiveSuffix;
         suffixLength = positiveSuffixLen;
         prefixLength = positivePrefixLen;
         } else {
         // Must have the negative prefix
         suffix = negativeSuffix;
         suffixLength = negativeSuffixLen;
         prefixLength = negativePrefixLen;
         // If the string consists of the prefix alone,
         // do nothing, or the result won't parse.
         if (length == prefixLength) {
         break;
         // We can't just add the suffix, because part of it
         // may already be there. For example, suppose the
         // negative prefix is "(" and the negative suffix is
         // "$)". If the user has typed "(345$", then it is not
         // correct to add "$)". Instead, only the missing part
         // should be added, in this case ")".
         if (targetString.endsWith(suffix) == false) {
         int i;
         for (i = suffixLength - 1; i > 0; i--) {
         if (targetString
         .regionMatches(length - i, suffix, 0, i)) {
         targetString += suffix.substring(i);
         break;
         if (i == 0) {
         // None of the suffix was present
         targetString += suffix;
         length = targetString.length();
         format.parse(targetString, parsePos);
         int endIndex = parsePos.getIndex();
         if (endIndex == length) {
         break; // Number is acceptable
         // Parse ended early
         // Since incomplete numbers don't always parse, try
         // to work out what went wrong.
         // First check for an incomplete positive prefix
         if (positivePrefixLen > 0 && endIndex < positivePrefixLen
         && length <= positivePrefixLen
         && targetString.regionMatches(0, positivePrefix, 0, length)) {
         break; // Accept for now
         // Next check for an incomplete negative prefix
         if (negativePrefixLen > 0 && endIndex < negativePrefixLen
         && length <= negativePrefixLen
         && targetString.regionMatches(0, negativePrefix, 0, length)) {
         break; // Accept for now
         // Allow a number that ends with the group
         // or decimal separator, if these are in use
         char lastChar = targetString.charAt(originalLength - 1);
         int decimalIndex = targetString.indexOf(decimalSeparator);
         if (format.isGroupingUsed() && lastChar == groupingSeparator
         && decimalIndex == -1) {
         // Allow a "," but only in integer part
         break;
         if (format.isParseIntegerOnly() == false
         && lastChar == decimalSeparator
         && decimalIndex == originalLength - 1) {
         // Allow a ".", but only one
         break;
         // No more corrections to make: must be an error
         if (errorListener != null) {
         errorListener.insertFailed(this, offset, str, a);
         return;
         } while (true == false);
         // Finally, add to the model
         super.insertString(offset, str, a);
         public void addInsertErrorListener(InsertErrorListener l) {
         if (errorListener == null) {
         errorListener = l;
         return;
         throw new IllegalArgumentException(
         "InsertErrorListener already registered");
         public void removeInsertErrorListener(InsertErrorListener l) {
         if (errorListener == l) {
         errorListener = null;
         public interface InsertErrorListener {
         public abstract void insertFailed(NumericPlainDocument doc, int offset,
         String str, AttributeSet a);
         protected InsertErrorListener errorListener;
         protected DecimalFormat format;
         protected char decimalSeparator;
         protected char groupingSeparator;
         protected String positivePrefix;
         protected String negativePrefix;
         protected int positivePrefixLen;
         protected int negativePrefixLen;
         protected String positiveSuffix;
         protected String negativeSuffix;
         protected int positiveSuffixLen;
         protected int negativeSuffixLen;
         protected ParsePosition parsePos = new ParsePosition(0);
         protected static DecimalFormat defaultFormat = new DecimalFormat();
    This is not working as desired, please help me.
    Can we use this code and get the desired result, or is there another way to do this?
    It is very urgent for me, please help immediately.
    Thanks in advance

    Hi camickr
    I learned how to format the code now, and you also responded to my text area problem, for which I am very thankful. Now I repeat the same problem I have with a text field.
    I have a window with a textfield on it, and when the end user starts entering data in it, it should behave as follows:
    when the user starts typing numerics it should accept input from the right-hand side and keep appending values from the right-hand side.
    First the default value should be 0.00, and as the user starts entering it goes as follows.
    For example, if I want to enter 123.45, the user starts entering:
    1 then it should display as 0.01
    2 then it should display as 0.12
    3 then it should display as 1.23
    4 then it should display as 12.34
    5 then it should display as 123.45
    I hope you will give me a quick reply because this is a very hard time for me.

  • Multiple Issues with HTC 8X and Verizon Doesn't Care

    I have had this phone for three months and had this issue the whole time. I searched the Verizon and Windows forums and found the same thing: only the Verizon HTC 8X is having this issue. On top of that I am now having hardware issues. The phone SIM does not show up as connected and I have to restart or re-seat the SIM. Battery life has gone down drastically: an overnight charge gets me 11 hours tops without using my data plan, with WiFi off and other major apps closed out. I shouldn't have to do that to utilize features on a smartphone. As soon as I begin using my data plan the 11 hours is cut in half, not to mention that if I use the phone for GPS navigation it gets drained in 30 minutes to an hour.
    Then the software is now freezing. Either it freezes completely and I have to restart, or applications I try to open stall and then go back to the start menu. I then have to power off the phone, which freezes and 15 minutes later powers off. 3 months into the phone's contract. Ridiculous.
    Bought from Letstalk.com and they told me I am SOL: 14 days for return and 30 for exchange, go see HTC. HTC says I'm SOL: go to Verizon and get a new SIM, factory reset the device as well, and if that doesn't help, ship it to us and we will repair. Are you kidding? I am a consultant for a major merger and acquisition company that deals with infrastructure practices all day. Send in my phone for repair on a 3 month old phone? Are you kidding me?
    Finally talked to Verizon. They are not consciously aware of the HTC 8X missed-call three-beep scenario when it has been on their forums since December of this year. With all my issues they told me I'm SOL. I don't want to send it in for repair, that is ridiculous. What will that solve? A couple of the issues. What about the three-beep issue that has plagued me for months? A refurbished phone of a different model from two years ago, no thanks. I received a new phone three months ago. So you want to make it my fault and tell me "here, take an iPhone 4 from two years ago for 300 dollars"?
    I am furious with this and want to proceed with making sure this lack of decent service for an issue like this is resolved and others don't have to face this type of treatment. I paid good money for a smartphone with reliable service. So far I'm being told to bend over and take it from everyone. I feel that the three-beep issue alone is a breach of the contract I have with Verizon and I should be able to exercise that right and terminate this disease.

    Hello,
    I received my new "reconditioned" phone to replace my 6 month old Droid 4. The replacement phone only holds a charge for 6 hours or so, with minimal use (no internet).
    I am VERY unhappy. I left a message with the store employee who helped me activate the replacement phone, but have not heard back. This is really disappointing. I paid a lot of money AND signed an additional 2 year contract in order to purchase my Droid 4. Now I am stuck with a piece of junk phone, as is my other employee who has a replaced Droid 4 as well. The store even told me they had "issues" with this model (the battery completely died on my phone 619-XXX-XXXX as well as on 760-XXX-XXXX). We concluded that when both our Droid 4 phones died within a couple of months of each other.
    At the very least, we should have our phones replaced with new ones, so they can at least hold a charge and be used for our business, the purpose they were purchased for.
    I would think making this right, since this is quite obviously NOT our fault, would be clear if you expect to retain customers.
    Lisa <Personal information removed for privacy per the Verizon Wireless Terms of Service.>
    Message was edited by: Verizon Moderator

  • Issues with Synchronize to and from DB (version 3.3.0.747)

    Having an issue with syncing the model and the data dictionary is a very specific case:
    1. Reverse engineer an existing table (T1)
    2. Copy the table (in data modeler) - T2
    3. Rename the table, make changes (add columns and keys)
    4. Forward engineer the new table (i.e., export and run the DDL for T2)
    5. Make additional changes to new table (T2) in data model.
    6. Run sync db with model - the sync routine compares my new table (t2) to the original table (T1) in the database! It cannot "find" my new table (T2) in the database to compare to.
    7. Alternatively try to sync the model with the db - this time it wants to sync the original table (T1) to itself (which has no change) and DROP my new table (T2) from the model.
    It appears that sync is looking at metadata on my new table (T2) that contains the original copied table's name (T1). And in fact, if I look at the table Summary properties, the source db object property does list the original table name (T1).
    I also tried to reverse engineer the new table (T2) and merge it with the model; that has the same behavior as the sync.
    Is there a way to fix this short of dropping T1 from the schema?

    Hi Kent,
    thanks for reporting the problem. I logged a bug and ER for that.
    Is there a way to fix this short of dropping T1 from the schema?
    The unnecessary information about the source needs to be cleared. Here is a script that will help with that:
    //array with 2 elements to illustrate different use cases - the first is used to have an exact match on the table name
    // the second one ("REGIO") can be used with the following check in the isInList function:
    //if(table.getName().startsWith(list[i])){
    var list = ["Regionsv1","REGIO"];
    function isInList(table){
         for(var i=0;i<list.length;i++){
              // different approaches to match table name can be used
              // be aware of letter case
              //if(table.getName().startsWith(list[i])){
              if(table.getName().equalsIgnoreCase(list[i])){
                   return true;
              }
         }
         return false;
    }
    function clearSourceStampforTable(table){
         clearSourceStamp(table);
         cols = table.getElements();
         for(var i=0;i<cols.length;i++){
              clearSourceStamp(cols[i]);
         }
         keys = table.getKeySet().toArray();
         for(var i=0;i<keys.length;i++){
              clearSourceStamp(keys[i]);
              if(keys[i].isFK()){
                   clearSourceStamp(keys[i].getFKIndexAssociation());
              }
         }
    }
    function clearSourceStamp(object){
         object.setSourceConnName("");
         object.setSourceObjSchema("");
         object.setSourceObjName("");
         object.setSourceDDLFile("");
         object.setDirty(true);
    }
    tables = model.getTableSet().toArray();
    for(var i=0;i<tables.length;i++){
         table = tables[i];
         if(isInList(table)){
              clearSourceStampforTable(table);
         }
    }