Error in sync group with large tables

For the past few days, the automated sync process configured in the Windows Azure portal has been failing. The following message appears:
SqlException Error Code: -2146232060 - SqlError Number:40550, 
Message: The session has been terminated because it has acquired too many locks. 
Searching for the error on the internet, I found the following post:
http://blogs.msdn.com/b/sync/archive/2010/09/24/how-to-sync-large-sql-server-databases-to-sql-azure.aspx
Basically, it says that in order to increase the application transaction size, it's necessary to set some parameters on the remote and local providers, and it gives an example script for that.
But how can I apply this change if my data sync process was created through the Azure web portal? Is there a way to access the sync scripts? How can I increase the transaction size from the Azure portal?
Please, any help is welcome.
Alvaro

Hi Alvaro,
I'm afraid there is no way to access the sync scripts or increase the transaction size from the Azure portal when using a SQL Azure Data Sync group.
Error 40550 occurs when a session acquires more than one million locks. Usually, the fix for this error is to read or modify fewer rows in a single transaction. You can use the following DMVs to monitor your transactions in SQL Azure:
sys.dm_tran_active_transactions
sys.dm_tran_database_transactions
sys.dm_tran_locks
sys.dm_tran_session_transactions
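For example, here is a minimal JDBC sketch (server name, database, and credentials are placeholders) that polls sys.dm_tran_locks and reports the sessions holding the most locks; a session approaching one million locks is at risk of error 40550:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class LockMonitor {
    public static void main(String[] args) throws Exception {
        // Placeholder connection string; substitute your own server, database, and credentials.
        String url = "jdbc:sqlserver://yourserver.database.windows.net:1433;"
                   + "database=yourdb;user=youruser;password=yourpassword;encrypt=true";
        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement();
             // Count the locks currently held by each session.
             ResultSet rs = stmt.executeQuery(
                 "SELECT request_session_id, COUNT(*) AS lock_count "
               + "FROM sys.dm_tran_locks "
               + "GROUP BY request_session_id "
               + "ORDER BY lock_count DESC")) {
            while (rs.next()) {
                System.out.printf("session %d holds %d locks%n",
                    rs.getInt("request_session_id"), rs.getInt("lock_count"));
            }
        }
    }
}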
In your scenario, to overcome error 40550, I recommend using the bcp utility or SQL Server Integration Services (SSIS) to move data from the large tables to SQL Azure.
With the bcp utility, you can divide your data into multiple sections and upload each section by running multiple bcp commands simultaneously. With SSIS, you can divide your data into multiple files on the file system and upload each file by executing multiple streams simultaneously.
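Whichever tool you choose, the key point is the same: commit in bounded batches so that no single transaction accumulates too many locks. Here is a minimal JDBC sketch of that chunking idea, assuming a hypothetical table dbo.MyTable(id, payload) and an in-memory row source:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.util.List;

public class ChunkedUploader {
    private static final int BATCH_SIZE = 5000; // keep each transaction small

    // In practice you would stream rows from a file or a source query
    // rather than hold them all in memory.
    public static void upload(String url, List<String[]> rows) throws Exception {
        try (Connection conn = DriverManager.getConnection(url)) {
            conn.setAutoCommit(false);
            try (PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO dbo.MyTable (id, payload) VALUES (?, ?)")) {
                int pending = 0;
                for (String[] row : rows) {
                    ps.setString(1, row[0]);
                    ps.setString(2, row[1]);
                    ps.addBatch();
                    if (++pending == BATCH_SIZE) {
                        ps.executeBatch();
                        conn.commit(); // release locks before the next chunk
                        pending = 0;
                    }
                }
                ps.executeBatch();
                conn.commit(); // flush the final partial chunk
            }
        }
    }
}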
Reference:
Optimizing Data Access and Messaging - SQL Azure Connection Management
Thanks,
Lydia Zhang
TechNet Community Support

Similar Messages

  • Address database perf with large tables for attachments (.doc,.jpg,.pdf..etc)

    Hello Folks,
    I have a database with large tables that contain attachments of various kinds of files, such as *.doc, *.docx, *.xlsx, *.jpg, *.jpeg, and *.pdf. Over time this database has grown quickly and become quite a handful to maintain.
    I've been thinking of a proper approach to redesign the tables' structure, with a small change on the application side.
    How do you implement a design in which the physical file attachment is not appended to these tables, but instead just a path to that file, which is located on another storage drive? The column's data type could then be just a string rather than an image.
    This topic first appeared in the Spiceworks Community
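    One common implementation (a minimal sketch, with hypothetical table, column, and path names) is to copy each uploaded file into a shared storage location and store only its path in a VARCHAR column, instead of the file bytes in an image column:

    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.nio.file.StandardCopyOption;
    import java.sql.Connection;
    import java.sql.PreparedStatement;

    public class AttachmentStore {
        // Hypothetical storage root on a separate drive or file share.
        private static final Path STORAGE_ROOT = Paths.get("\\\\fileserver\\attachments");

        // Copies the file into storage and records only its path, not its bytes.
        public static void saveAttachment(Connection conn, long recordId, Path sourceFile)
                throws Exception {
            Path target = STORAGE_ROOT.resolve(recordId + "_" + sourceFile.getFileName());
            Files.copy(sourceFile, target, StandardCopyOption.REPLACE_EXISTING);
            // Hypothetical table: Attachments(record_id BIGINT, file_path VARCHAR(500))
            try (PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO Attachments (record_id, file_path) VALUES (?, ?)")) {
                ps.setLong(1, recordId);
                ps.setString(2, target.toString());
                ps.executeUpdate();
            }
        }
    }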

    Hi,
    I'm sorry you are having this problem, here is another post about the same problem, where the cause of the problem is described:
    https://support.mozilla.com/en-US/questions/894442
    A bug has been filed to track resolution of the issue here, because a true fix isn't yet available:
    https://bugzilla.mozilla.org/show_bug.cgi?id=703015
    I apologize for the inconvenience.
    Regards,
    Michelle

  • Can ui:table deal with large table?

    I have found that h:dataTable can do pagination because its data source is just a DataModel. But ui:table's data source is a data provider, which looks somewhat complex and confusing.
    I have a large table and I want to load the data on demand, so I tried to implement a provider. But I soon found that ui:table seems to always load all data from the provider.
    In TableRowGroup.java, there is code such as:
    provider.getRowKeys(rowCount, null);
    Passing null makes the provider load all data.
    So ui:table can NOT deal with large tables!?
    thx.
    fan

    But ui:table just uses the TableDataProvider interface. TableDataProvider is a wrapper for the CachedRowSet.
    There are two layers between the ui:table component and the database table: the RowSet layer and the Data Provider layer. The RowSet layer makes the connection to the database, executes the queries, and manages the result set. The Data Provider layer provides a common interface for accessing many types of data, from rowsets, to Array objects, to Enterprise JavaBeans objects.
    Typically, the only time that you work with the RowSet object is when you need to set query parameters. In most other cases, you should use the Data Provider to access and manipulate the data.
    What can a CachedRowSet (or CachedRowSetProvider?) do?
    Check out the API that I pointed you to to see what you can do with a CachedRowSet.
    Does the Table cache the data itself? Maybe this way is convenient for filtering and ordering?
    Thx.
    I do not know the answers to these questions.
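    For what it's worth, the standard javax.sql.rowset.CachedRowSet does support fetching one page at a time via setPageSize and nextPage. A minimal sketch (connection URL and query are placeholders; RowSetProvider is the factory in current JDKs):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import javax.sql.rowset.CachedRowSet;
    import javax.sql.rowset.RowSetProvider;

    public class PagedRowSetDemo {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection("jdbc:yourdb://host/db")) {
                CachedRowSet crs = RowSetProvider.newFactory().createCachedRowSet();
                crs.setCommand("SELECT id, name FROM big_table"); // placeholder query
                crs.setPageSize(100);  // fetch only 100 rows per round trip
                crs.execute(conn);     // populates the first page
                do {
                    while (crs.next()) {
                        System.out.println(crs.getInt("id") + " " + crs.getString("name"));
                    }
                } while (crs.nextPage()); // pull the next page on demand
            }
        }
    }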

  • Dramatic slowdown with large tables--common problem?

    I opened an excel .xls file in Numbers. The file has one sheet and on that sheet is a table with 45,000 rows and 7 columns, including a date column. I was surprised to experience response times between 30 - 60 seconds for doing tasks like adjusting column widths and formatting the date column. Sorting and filtering also took > 30 seconds, each time I changed something (sort order, filter criteria). There are no formulas in the table at this point--just data.
    I hacked off 3/4 of the rows to get the table below 10,000 rows and did the same tasks. The times were roughly proportionately shorter, but still pretty long compared to Excel on Windows.
    I'm running a Mac Pro with 2 quad-core processors and six GB of memory, so I don't think the problem is hardware. (When I run the same tasks in Windows on the same machine the response time is instantaneous for all tasks).
    Is this a problem with Numbers that anyone with large tables will experience, or is it likely to be something wrong with my particular installation of Numbers (I'm on the most recent version of Numbers)?

    Hello
    The Terms of Use, which you must have read, state:
    "to help you resolve issues, ask questions, get tips and advice, and more."
    "If you have a technical question about an Apple product, be sure to check out Apple's support resources first by consulting the application Help menu on your computer and visiting our Support site to view articles and more on our product support pages."
    "How do I post a question?"
    "If you searched the forums and didn't find an answer to your question or issue, click the Post New Topic link at the top of a relevant forum page to post your own question."
    A quick search with the keyword "speed" would already have told you: yes, Numbers is slow!
    Yvan KOENIG (from FRANCE, Monday, February 25, 2008, 20:46:52)

  • OutOfMemory error when trying to display large tables

    We use JDeveloper 10.1.3. Our project uses ADF Faces + EJB3 Session Facade + TopLink.
    We have a large table (over 100K rows) which we try to show to the user via an ADF Read-only Table. We build the page by dragging the facade findAllXXX method's result onto the page and choosing "ADF Read-only Table".
    The problem is that during execution we get an OutOfMemory error. The Facade method attempts to extract the whole result set and to transfer it to a List. But the result set is simply too large. There's not enough memory.
    Initially, I was under the impression that the table iterator would be running queries that automatically fetch just a chunk of the db table data at a time. Sadly, this is not the case. Apparently, all the data gets fetched. And then the iterator simply iterates through a List in memory. This is not what we needed.
    So, I'd like to ask: is there a way for us to show a very large database table inside an ADF Table? And when the user clicks on "Next", to have the iterator automatically execute queries against the database and fetch the next chunk of data, if necessary?
    If that is not possible with ADF components, it looks like we'll have to either write our own component or simply use the old code that we have which supports paging for huge tables by simply running new queries whenever necessary. Alternatively, each time the user clicks on "Next" or "Previous", we might have to intercept the event and manually send range information to a facade method which would then fetch the appropriate data from the database. I don't know how easy or difficult that would be to implement.
    Naturally, I'd prefer to have that functionality available in ADF Faces. I hope there's a way to do this. But I'm still a novice and I would appreciate any advice.

    Hi Shay,
    We do use search pages and we do give the users the opportunity to specify search criteria.
    The trouble comes when the search criteria are not specific enough and the result set is huge. Transferring the whole result set into memory will be disastrous, especially for servers used by hundreds of users simultaneously. So, we'll have to limit the number of rows fetched at a time. We should do this either by setting the Maximum Rows option for the TopLink query (or using rownum<=XXX inside the SQL), or through using a data provider that supports paging.
    I don't like the first approach very much because I don't have a good recipe for calculating the optimum number of Maximum Rows for each query. By specifying some average number of, say, 500 rows, I risk fetching too many rows at once and I also risk filling the TopLink cache with objects that are not necessary. I can use methods like query.dontMaintainCache() but in my case this is a workaround, not a solution.
    I would prefer fetching relatively small chunks of data at a time, as sketched below, and not limiting the user to a certain number of maximum rows. Furthermore, this way I won't fetch large amounts of data at the very beginning and I won't be forced to turn off caching for the query.
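    As a point of reference, the rownum approach mentioned above can page through a result set without keeping a cursor open between requests, by windowing each query. A minimal JDBC sketch, with hypothetical table and column names:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.util.ArrayList;
    import java.util.List;

    public class RownumPager {
        // Fetches one page using the classic Oracle ROWNUM windowing idiom.
        public static List<String> fetchPage(Connection conn, int pageSize, int pageIndex)
                throws Exception {
            String sql =
                "SELECT name FROM ("
              + "  SELECT t.*, ROWNUM rn FROM ("
              + "    SELECT name FROM employees ORDER BY id"
              + "  ) t WHERE ROWNUM <= ?"
              + ") WHERE rn > ?";
            List<String> page = new ArrayList<String>();
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                ps.setInt(1, (pageIndex + 1) * pageSize); // upper bound of the window
                ps.setInt(2, pageIndex * pageSize);       // lower bound of the window
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        page.add(rs.getString("name"));
                    }
                }
            }
            return page;
        }
    }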
    Regarding the "ADF Developer's Guide", I read there that "To create a table using a data control, you must bind to a method on the data control that returns a collection. JDeveloper allows you to do this declaratively by dragging and dropping a collection from the Data Control Palette."
    So, it looks like I'll have to implement a collection which, in turn, implements the paging functionality that I need. Is the TopLink object you are referring to some type of collection? I know that I can specify a collection class that TopLink should use for queries through the query.useCollectionClass(...) method. But if TopLink doesn't provide the collection I need, I will have to write that collection myself. I still haven't found the section in the TopLink documentation that says what types of Collections are natively provided by TopLink. I can see other collections like oracle.toplink.indirection.IndirectList, for example. But I have not found a specific discussion on large result sets with the exception of Streams and Cursors and I feel uneasy about maintaining cursors between client requests.
    And I completely agree with you about reading the docs first and doing the programming afterwards. Whenever time permits, I always do that. I have already read the "ADF Developer's Guide" with the exception of chapters 20 and 21. And I switched to the "TopLink Developer's Guide" because it seems that we must focus on the model. Unfortunately, because of the circumstances, I've spent a lot of time reading and not enough time practicing what I read. So, my knowledge is kind of shaky at the moment and perhaps I'm not seeing things that are obvious to you. That's why I tried using this forum -- to ask the experts for advice on the best method for implementing paging. And I'm thankful to everyone who replied to my post so far.

  • Runtime error when syncing P1i with PC

    I get a runtime error when syncing my P1i to my PC. The error is linked to the file 'dxp syncml.exe', and it says the application 'has requested the Runtime to terminate it in an unusual way'. How do you fix this?

    Thanks Joanne. Let me give you feedback on what's happening so far.
    After some doing, I discovered that my firewall is controlled by Norton Firewall Provider. (Bear in mind I am a novice in terms of managing these computer protocols.) To modify the firewall settings, Norton calls this 'Smart Firewall' in its control panel; I go to Program Control, then Configure. A list of programs came up with the options to add, modify, remove, or rename. I searched the list and found that DXP SyncML.exe and mRouterRuntime.exe were listed and set to 'Auto' in the Access column. I noticed that under Access, the options include auto, allow, block, and custom. I reasoned that 'allow' is the highest level of access, and to support the sync feature, I changed the status from 'auto' to 'allow' for the DXP SyncML and mRouterRuntime executable files. If you believe it is OK and safer to keep the auto status, advise me.
    I did not find any of these programs in the listing: Bearer Abstraction Layer (SCBAL.exe), Generic Device Management Executable (Generic.exe), or Symbian Connect Object Model for Symbian Connect QI (SymbianConnectRuntime.exe). I did a search and located generic.exe and the other two files under 'common files' in the 'teleca shared' folder, and under 'symbian' in 'shared' and 'symbianconnectruntime', respectively.
    Norton's control panel for Smart Firewall gave me three options when adding programs: allow, block, and manually configure internet settings. The last of the three was recommended by Norton. However, I selected 'allow' for generic.exe, symbianconnectruntime.exe, and scbal.exe. Please advise if you believe I need to modify the terms I entered to set up this procedure (is there a better setting that allows me to achieve the same level of functionality without compromising my security?).
    Now Joanne, what's left is for me to try the sync after making these changes. I feel as if I accomplished a lot by only reaching this far. I will have to do a reinstallation of the SE PC Suite before moving on, though. Let me explain.
    When the sync failed repeatedly because of the unexpected termination linked to dxp syncml.exe, I uninstalled the SE PC Suite, which had been installed from a disc, and installed a version from the Sony website. That version is 1.6.0, with a copyright date of 2006. With this, the sync worked. By the way, I am using Windows 7 Starter. What I am going to do later is uninstall this version of the suite and reinstall the one from the disc that was linked with the problem we are solving. When I do that, I will give you feedback.
    Just in case you are wondering, I have a vested interest in using the PC Suite from the disc. That application works fine for the file manager component, where the drive on the phone and the removable media card are read. In the 1.6.0 version, I cannot get the file manager to see the phone as connected, though when I put the card (is that called an M2 disc?) in the phone, it reads. So, although sync is working with the 1.6.0 version from the website, I want to use the version I have on disc if the sync component gets over the problem we are trying to solve.
    Hope I am on the right track so far. I thank you profusely for your patience in dealing with my query; the solution will make a world of difference for me and many other P1i owners. Looking forward to the follow-up.
    Rohan Bell

  • Using workspaces with large tables

    Hello
    I've got a few large tables (6-10 GB+) that will have around 500k new rows added daily as part of an overnight batch job. No rows are ever updated, only inserted or deleted and then re-inserted. I want to change the process that adds the new rows from an overnight batch to a near-real-time process, i.e. a queue will be populated with requests to rebuild the content of these tables for specific parent IDs, and a process will consume those requests throughout the day rather than going through the whole list in one go.
    I need to provide views of the data as of a point in time, i.e. what the content of the tables was at close of business yesterday, and for this I am considering using workspaces.
    I need to keep at least 10 days' worth of data, and I was planning to partition the table and drop one partition every day. If I use workspaces, I can see that Oracle creates a view in place of the original table and creates a versioned table with the _LT suffix; this is the table name returned by DBMS_WM.GetPhysicalTableName. Would it be considered bad practice to drop partitions from this physical table as I would with a non-version-enabled table? If so, what would be the best method for dropping off old data?
    Thanks in advance
    David

    I've just spotted the workspace manager forum, I'll post there. :-)

  • Got 800ccc0f-0-0-332 error when sync inbox with IMAP

    Dear all,
    I have Outlook configured with IMAP, and I get error 800ccc0f-0-0-332 when it syncs the inbox.
    Besides, messages are not received into the inbox.
    Does anyone have an idea?
    Hey, I am back

    Hi Xiu,
    According to your description, I understand that you configured an Exchange account with IMAP in Outlook and got an error while syncing the Inbox.
    I notice that Outlook cannot receive mail into the Inbox; how about sending emails?
    Please check whether the incoming server is configured correctly.
    Please expand "Folder" in Outlook and check whether there are related synchronization logs in the "Sync Issues" folder. Detailed information, with sensitive information removed, would be helpful for further troubleshooting.
    Please try a manual send/receive in Outlook, or try re-creating the profile.
    Similar thread for your reference:
    http://answers.microsoft.com/en-us/office/forum/office_2013_release-outlook/problems-with-outlook-2013-and-imap/e7aa0cb6-2182-47eb-84e4-2a9c6f86d892
    Thanks,
    Mavis Huang
    TechNet Community Support

  • Error executing a query with large result set

    Dear all,
    after executing a query which uses key figures with exception aggregation the BIA-server log (TrexIndexServerAlert_....trc) displays the following messages:
    2009-09-29 10:59:00.433 e QMediator    QueryMediator.cpp(00324) : 6952; Error executing physical plan: AttributeEngine: not enough memory.;in executor::Executor in cube: bwp_v_tl_c02
    2009-09-29 10:59:00.434 e SERVER_TRACE TRexApiSearch.cpp(05162) : IndexID: bwp_v_tl_c02, QueryMediator failed executing query, reason: Error executing physical plan: AttributeEngine: not enough memory.;in executor::Executor in cube: bwp_v_tl_c02
    --> Does anyone know what this message means exactly? I fear that the BIA installation is running out of physical memory, but I would appreciate any other opinions.
    - Package Wise Read (SAP Note 1157582) does not solve the problem as the error message is not: "AggregateCalculator returning out of memory with hash table size..."
    - To get an impression of the data volume, I had a look at table RSDDSTAT_OLAP for a query with a smaller amount of data:
       Selected rows      : 50.000.000 (Event 9011)
       Transferred rows :   4.800.000 (Event 9010)
    It is possible to calculate the number of cells retrieved from one index by multiplying the number of records from table RSDDSTAT_OLAP by the number of key figures in the query. In my example there is only one key figure, so 4.800.000 cells are passed to the analytical engine.
    --> Is there a way to find this figure in some kind of statistics table? That would be helpful for complex queries.
    I am looking forward to your replies,
    Best regards
    Bjoern

    Hi Björn,
    I recommend you upgrade to revision 52 or 53. Rev. 49 is really stable, but there are some bugs in it, and if you use BW SP >= 17 you can use some additional features.
    Please refer to this thread: you shouldn't use more than 50% of your memory (the other 50% is for reporting). To that end I have stolen this quote from Vitaliy:
    The idea is that the data for all of your BIA indexes together does not consume more than 50% of the total RAM of the active blades (page memory cannot be counted!).
    The simplest test is, during a period when no one is using the accelerator, to remove all indexes (including temporary ones) from the main memory of the BWA, and then load all indexes for all InfoCubes into main memory. Then check your RAM utilization.
    Regards,
    -Vitaliy
    Regards,
    Jens

  • XSU: Dealing with large tables / large XML files

    Hi,
    I'm trying to generate an XML file from a "large" table (about 7 million rows, 512 MB of storage) by means of XSU. I run into "java.lang.OutOfMemoryError" even after raising the heap size to 1 GB (option -Xmx1024m on the java command line).
    For the moment, I'm involved in an evaluation process. But in the near future, our applications are likely to deal with large amounts of XML data (typically hundreds of megabytes of storage, which may mean gigabytes of XML code), both in updating/inserting data and in producing XML streams from existing data in the relational DB.
    Any ideas about memory issues regarding XSU? Should we consider to use XMLType instead of "classical" relational tables loaded/unloaded by means of XSU?
    Any hint appreciated.
    Regards,
    /Hervi QUENIVET
    P.S. our environment is Linux red hat 7.3 and Oracle 9.2.0.1 server

    Try to split the XML before you process it. You can take a look at the XMLDocumentSplitter explained in Building Oracle XML Applications by Steve Muench.
    The other alternative is to write your own SAX handler and send the chunks of XML for insert, as in the sketch below.
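    A rough sketch of such a handler, assuming a document whose repeating child elements of the root are the rows (attributes and namespaces omitted for brevity; the flush step is where each chunk would be handed to XSU or a JDBC insert):

    import java.io.File;
    import javax.xml.parsers.SAXParser;
    import javax.xml.parsers.SAXParserFactory;
    import org.xml.sax.Attributes;
    import org.xml.sax.helpers.DefaultHandler;

    // Streams a large XML file and flushes every CHUNK_SIZE row elements,
    // so the whole document is never held in memory at once.
    public class ChunkingHandler extends DefaultHandler {
        private static final int CHUNK_SIZE = 1000;
        private final StringBuilder chunk = new StringBuilder();
        private final StringBuilder current = new StringBuilder();
        private int rowsInChunk = 0;
        private int depth = 0;

        @Override
        public void startElement(String uri, String local, String qName, Attributes atts) {
            if (depth > 0) current.append('<').append(qName).append('>');
            depth++;
        }

        @Override
        public void characters(char[] ch, int start, int length) {
            if (depth > 1) current.append(ch, start, length);
        }

        @Override
        public void endElement(String uri, String local, String qName) {
            depth--;
            if (depth > 0) current.append("</").append(qName).append('>');
            if (depth == 1) {                 // one complete row element finished
                chunk.append(current);
                current.setLength(0);
                if (++rowsInChunk == CHUNK_SIZE) flush();
            }
        }

        @Override
        public void endDocument() {
            if (rowsInChunk > 0) flush();     // flush the final partial chunk
        }

        private void flush() {
            // Wrap the chunk in a root element and hand it to the inserter here;
            // this sketch just reports the chunk size.
            System.out.println("chunk of " + rowsInChunk + " rows, "
                + chunk.length() + " chars");
            chunk.setLength(0);
            rowsInChunk = 0;
        }

        public static void main(String[] args) throws Exception {
            SAXParser parser = SAXParserFactory.newInstance().newSAXParser();
            parser.parse(new File(args[0]), new ChunkingHandler());
        }
    }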

  • Error in Generating reports with large amount of data using OBIR

    Hi all,
    we have integrated OBIR (Oracle BI Reporting) with OIM (Oracle Identity Management) to generate custom reports. Some of the custom reports contain a large amount of data (approx. 80-90K rows with 7-8 columns), and the queries for these reports primarily use the audit tables and the resource form tables. When we try to generate a report, it works fine with HTML, where the report is generated directly in the console, but when we try to generate the same report and save it as PDF or Excel, it fails with the following error.
    [120509_133712190][][STATEMENT] Generating page [1314]
    [120509_133712193][][STATEMENT] Phase2 time used: 3ms
    [120509_133712193][][STATEMENT] Total time used: 41269ms for processing XSL-FO
    [120509_133712846][oracle.apps.xdo.common.font.FontFactory][STATEMENT] type1.Helvetica closed.
    [120509_133712846][oracle.apps.xdo.common.font.FontFactory][STATEMENT] type1.Times-Roman closed.
    [120509_133712848][][PROCEDURE] FO+Gen time used: 41924 msecs
    [120509_133712848][oracle.apps.xdo.template.FOProcessor][STATEMENT] clearInputs(Object) is called.
    [120509_133712850][oracle.apps.xdo.template.FOProcessor][STATEMENT] clearInputs(Object) done. All inputs are cleared.
    [120509_133712850][oracle.apps.xdo.template.FOProcessor][STATEMENT] End Memory: max=496MB, total=496MB, free=121MB
    [120509_133818606][][EXCEPTION] java.net.SocketException: Socket closed
    at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:99)
    at java.net.SocketOutputStream.write(SocketOutputStream.java:136)
    at weblogic.servlet.internal.ChunkOutput.writeChunkTransfer(ChunkOutput.java:525)
    at weblogic.servlet.internal.ChunkOutput.writeChunks(ChunkOutput.java:504)
    at weblogic.servlet.internal.ChunkOutput.flush(ChunkOutput.java:382)
    at weblogic.servlet.internal.ChunkOutput.checkForFlush(ChunkOutput.java:469)
    at weblogic.servlet.internal.ChunkOutput.write(ChunkOutput.java:304)
    at weblogic.servlet.internal.ChunkOutputWrapper.write(ChunkOutputWrapper.java:139)
    at weblogic.servlet.internal.ServletOutputStreamImpl.write(ServletOutputStreamImpl.java:169)
    at java.io.BufferedOutputStream.write(BufferedOutputStream.java:105)
    at oracle.apps.xdo.servlet.util.IOUtil.readWrite(IOUtil.java:47)
    at oracle.apps.xdo.servlet.CoreProcessor.process(CoreProcessor.java:280)
    at oracle.apps.xdo.servlet.CoreProcessor.generateDocument(CoreProcessor.java:82)
    at oracle.apps.xdo.servlet.ReportImpl.renderBodyHTTP(ReportImpl.java:562)
    at oracle.apps.xdo.servlet.ReportImpl.renderReportBodyHTTP(ReportImpl.java:265)
    at oracle.apps.xdo.servlet.XDOServlet.writeReport(XDOServlet.java:270)
    at oracle.apps.xdo.servlet.XDOServlet.writeReport(XDOServlet.java:250)
    at oracle.apps.xdo.servlet.XDOServlet.doGet(XDOServlet.java:178)
    at oracle.apps.xdo.servlet.XDOServlet.doPost(XDOServlet.java:201)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
    at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227)
    at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:125)
    at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:292)
    at weblogic.servlet.internal.TailFilter.doFilter(TailFilter.java:26)
    at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:42)
    at oracle.apps.xdo.servlet.security.SecurityFilter.doFilter(SecurityFilter.java:97)
    at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:42)
    at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3496)
    at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
    at weblogic.security.service.SecurityManager.runAs(Unknown Source)
    at weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2180)
    at weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2086)
    at weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1406)
    at weblogic.work.ExecuteThread.execute(ExecuteThread.java:201)
    at weblogic.work.ExecuteThread.run(ExecuteThread.java:173)
    It seems we face this issue when the query processing takes some time. Do I need to perform any additional configuration to generate such reports?

    java.net.SocketException: Socket closed
         at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:99)
         at java.net.SocketOutputStream.write(SocketOutputStream.java:136)
         at weblogic.servlet.internal.ChunkOutput.writeChunkTransfer(ChunkOutput.java:525)
         at weblogic.servlet.internal.ChunkOutput.writeChunks(ChunkOutput.java:504)
         at weblogic.servlet.internal.ChunkOutput.flush(ChunkOutput.java:382)
         at weblogic.servlet.internal.CharsetChunkOutput.flush(CharsetChunkOutput.java:249)
         at weblogic.servlet.internal.ChunkOutput.checkForFlush(ChunkOutput.java:469)
         at weblogic.servlet.internal.CharsetChunkOutput.implWrite(CharsetChunkOutput.java:396)
         at weblogic.servlet.internal.CharsetChunkOutput.write(CharsetChunkOutput.java:198)
         at weblogic.servlet.internal.ChunkOutputWrapper.write(ChunkOutputWrapper.java:139)
         at weblogic.servlet.internal.ServletOutputStreamImpl.write(ServletOutputStreamImpl.java:169)
         at com.tej.systemi.util.AroundData.copyStream(AroundData.java:311)
         at com.tej.systemi.client.servlet.servant.Newdownloadsingle.producePageData(Newdownloadsingle.java:108)
         at com.tej.systemi.client.servlet.servant.BaseViewController.serve(BaseViewController.java:542)
         at com.tej.systemi.client.servlet.FrontController.doRequest(FrontController.java:226)
         at com.tej.systemi.client.servlet.FrontController.doPost(FrontController.java:128)
         at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
         at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
         at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227)
         at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:125)
         at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:292)
         at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:175)
         at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3498)
         at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
         at weblogic.security.service.SecurityManager.runAs(Unknown Source)
         at weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2180)
         at weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2086)
         at weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1406)
         at weblogic.work.ExecuteThread.execute(ExecuteThread.java:201)
         at weblogic.work.ExecuteThread.run(ExecuteThread.java:17
    (Please help find a solution to this issue; it is in production and we need it ASAP.)
    Thanks in Advance

  • [JHS 10.1.3.0.59] Error when generating Group with Display fields

    Do you already know of this error in JAG when generating groups that contain items with display type 'display field'? Or am I doing something wrong?
    [LovOrganizationTable.jspx, default/item/table/tableDisplayField.vm] Velocity log [error] ResourceManager : unable to find resource 'default/item/table/tableDisplayField.vm' in any resource loader.
    Toine

    Toine,
    Thanks for reporting this; we should remove displayField from the list of display types. If you want a read-only field in 10.1.3, you can choose the proper display type and then set the "Insert Allowed" and "Update Allowed" properties to false.
    Steven Davelaar,
    JHeadstart Team.

  • Error when syncing groups

    I've got a test install of Vibe, and when I try to do an LDAP sync, with the option to automatically create group profiles, I get an error in catalina.out:
    2014-01-21 21:07:27,272 WARN [http-8080-1] [org.kablink.teaming.spring.web.portlet.DispatcherPortlet] - Handler execution resulted in exception - forwarding to resolved error view
    java.lang.ArrayStoreException
    And only 10 groups are created.
    After some prodding from other forum posts, I checked the Java heap settings, and the Xmx setting is 5 GB; there are 8 GB assigned to the VM. I have about 3500 user accounts (they seem to be importing fine) and about 100 groups. The DB is MS SQL 2008 R2.
    The whole error is:
    Code:
    2014-01-21 21:07:27,151 INFO [http-8080-1] [org.kablink.teaming.module.ldap.impl.LdapModuleImpl] - Searching for groups in base dn: ou=jc,ou=domain,o=jccsd
    2014-01-21 21:07:27,272 WARN [http-8080-1] [org.kablink.teaming.spring.web.portlet.DispatcherPortlet] - Handler execution resulted in exception - forwarding to resolved error view
    java.lang.ArrayStoreException
    at java.util.ArrayList.addAll(ArrayList.java:250)
    at org.kablink.teaming.dao.impl.CoreDaoImpl$10.doInHibernate(CoreDaoImpl.java:773)
    at org.springframework.orm.hibernate3.HibernateTemplate.doExecute(HibernateTemplate.java:406)
    at org.springframework.orm.hibernate3.HibernateTemplate.execute(HibernateTemplate.java:339)
    at org.kablink.teaming.dao.impl.CoreDaoImpl.loadObjects(CoreDaoImpl.java:752)
    at org.kablink.teaming.module.ldap.impl.LdapModuleImpl.updateGroup(LdapModuleImpl.java:3143)
    at org.kablink.teaming.module.ldap.impl.LdapModuleImpl$GroupCoordinator.record(LdapModuleImpl.java:2432)
    at org.kablink.teaming.module.ldap.impl.LdapModuleImpl.syncGroups(LdapModuleImpl.java:2653)
    at org.kablink.teaming.module.ldap.impl.LdapModuleImpl.syncAll(LdapModuleImpl.java:1323)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:60)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:37)
    at java.lang.reflect.Method.invoke(Method.java:611)
    at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:309)
    at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:183)
    at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:150)
    at org.kablink.teaming.search.interceptor.IndexSynchronizationManagerInterceptor.invoke(IndexSynchronizationManagerInterceptor.java:67)
    at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:172)
    at org.kablink.teaming.module.interceptor.EventListenerManagerInterceptor.invoke(EventListenerManagerInterceptor.java:64)
    at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:172)
    at org.kablink.teaming.util.aopalliance.InvocationStatisticsInterceptor.invoke(InvocationStatisticsInterceptor.java:49)
    at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:172)
    at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:202)
    at com.sun.proxy.$Proxy17.syncAll(Unknown Source)
    at org.kablink.teaming.module.ldap.LdapSyncThread.doLdapSync(LdapSyncThread.java:185)
    at org.kablink.teaming.portlet.forum.AjaxController.ajaxStartLdapSync(AjaxController.java:1073)
    at org.kablink.teaming.portlet.forum.AjaxController.handleRenderRequestAfterValidation(AjaxController.java:532)
    at org.kablink.teaming.web.portlet.SAbstractController.handleRenderRequestInternal(SAbstractController.java:286)
    at org.springframework.web.portlet.mvc.AbstractController.handleRenderRequest(AbstractController.java:222)
    at org.springframework.web.portlet.mvc.SimpleControllerHandlerAdapter.handleRender(SimpleControllerHandlerAdapter.java:70)
    at org.springframework.web.portlet.DispatcherPortlet.doRenderService(DispatcherPortlet.java:734)
    at org.springframework.web.portlet.FrameworkPortlet.processRequest(FrameworkPortlet.java:522)
    at org.springframework.web.portlet.FrameworkPortlet.doDispatch(FrameworkPortlet.java:470)
    at javax.portlet.GenericPortlet.render(GenericPortlet.java:233)
    at org.kablink.teaming.portletadapter.servlet.PortletAdapterController.handleRequestInternal(PortletAdapterController.java:131)
    at org.springframework.web.servlet.mvc.AbstractController.handleRequest(AbstractController.java:153)
    at org.springframework.web.servlet.mvc.SimpleControllerHandlerAdapter.handle(SimpleControllerHandlerAdapter.java:48)
    at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:790)
    at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:719)
    at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:644)
    at org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:549)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:617)
    at org.kablink.teaming.portletadapter.servlet.PortletAdapterServlet.service(PortletAdapterServlet.java:74)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
    at org.kablink.teaming.web.servlet.filter.LoginFilter.doFilter(LoginFilter.java:155)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
    at org.kablink.teaming.web.servlet.filter.DefaultCharacterEncodingFilter.doFilter(DefaultCharacterEncodingFilter.java:60)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
    at org.springframework.security.util.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:378)
    at org.springframework.security.intercept.web.FilterSecurityInterceptor.invoke(FilterSecurityInterceptor.java:109)
    at org.springframework.security.intercept.web.FilterSecurityInterceptor.doFilter(FilterSecurityInterceptor.java:83)
    at org.springframework.security.util.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:390)
    at org.springframework.security.ui.SessionFixationProtectionFilter.doFilterHttp(SessionFixationProtectionFilter.java:67)
    at org.springframework.security.ui.SpringSecurityFilter.doFilter(SpringSecurityFilter.java:53)
    at org.springframework.security.util.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:390)
    at org.springframework.security.ui.ExceptionTranslationFilter.doFilterHttp(ExceptionTranslationFilter.java:101)
    at org.springframework.security.ui.SpringSecurityFilter.doFilter(SpringSecurityFilter.java:53)
    at org.springframework.security.util.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:390)
    at org.springframework.security.providers.anonymous.AnonymousProcessingFilter.doFilterHttp(AnonymousProcessingFilter.java:105)
    at org.springframework.security.ui.SpringSecurityFilter.doFilter(SpringSecurityFilter.java:53)
    at org.springframework.security.util.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:390)
    at org.springframework.security.wrapper.SecurityContextHolderAwareRequestFilter.doFilterHttp(SecurityContextHolderAwareRequestFilter.java:91)
    at org.springframework.security.ui.SpringSecurityFilter.doFilter(SpringSecurityFilter.java:53)
    at org.springframework.security.util.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:390)
    at org.springframework.security.ui.AbstractProcessingFilter.doFilterHttp(AbstractProcessingFilter.java:278)
    at org.springframework.security.ui.SpringSecurityFilter.doFilter(SpringSecurityFilter.java:53)
    at org.springframework.security.util.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:390)
    at org.springframework.security.ui.basicauth.BasicProcessingFilter.doFilterHttp(BasicProcessingFilter.java:174)
    at org.springframework.security.ui.SpringSecurityFilter.doFilter(SpringSecurityFilter.java:53)
    at org.springframework.security.util.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:390)
    at org.kablink.teaming.spring.security.LogoutFilter.doFilterHttp(LogoutFilter.java:73)
    at org.springframework.security.ui.SpringSecurityFilter.doFilter(SpringSecurityFilter.java:53)
    at org.springframework.security.util.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:390)
    at org.springframework.security.context.HttpSessionContextIntegrationFilter.doFilterHttp(HttpSessionContextIntegrationFilter.java:235)
    at org.springframework.security.ui.SpringSecurityFilter.doFilter(SpringSecurityFilter.java:53)
    at org.springframework.security.util.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:390)
    at org.springframework.security.util.FilterChainProxy.doFilter(FilterChainProxy.java:175)
    at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:237)
    at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:167)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
    at org.kablink.teaming.asmodule.servlet.filter.ResponseHeaderFilter.doFilter(ResponseHeaderFilter.java:127)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
    at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
    at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
    at org.kablink.teaming.tomcat.valve.ZoneContextValve.invoke(ZoneContextValve.java:69)
    at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
    at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
    at org.apache.catalina.valves.RemoteIpValve.invoke(RemoteIpValve.java:647)
    at org.apache.catalina.valves.RequestFilterValve.process(RequestFilterValve.java:316)
    at org.apache.catalina.valves.RemoteAddrValve.invoke(RemoteAddrValve.java:81)
    at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
    at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:293)
    at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:859)
    at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:602)
    at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
    at java.lang.Thread.run(Thread.java:761)

    Are you using Vibe 3.4 FTF1? AFAIK there is a bug in FTF1 that might cause this error.
    We have seen a few installations with a similar issue.
    I'll keep you up to date if there is a fix for it or we find out something.
    HTH
    Erik

  • Ora-00600 error when registering xsd with large enumeration

    I am trying to register the xsd:
    http://212.130.77.78/StandatWS/StandatXSD.asmx/GetSchema?SchemaFile=std00019.xsd
    It contains a very large enumerated type like this:
    <xsd:simpleType name="std00019">
      <xsd:annotation>
        <xsd:documentation>Stofparametre</xsd:documentation>
      </xsd:annotation>
      <xsd:restriction base="xsd:string">
        <xsd:enumeration value="0000">
          <xsd:annotation>
            <xsd:documentation></xsd:documentation>
          </xsd:annotation>
        </xsd:enumeration>
        <xsd:enumeration value="0001">
          <xsd:annotation>
            <xsd:documentation>Syn</xsd:documentation>
          </xsd:annotation>
        </xsd:enumeration>
    The file is more than 7000 lines of XML. If I cut it down to fewer than 4000 lines or so, then I can register it. Otherwise it gives me an error like this:
    1 begin
    2 dbms_xmlschema.registerURI(
    3 'http://212.130.77.78/StandatWS/StandatXSD.asmx/GetSchema?SchemaFile=std00019.xsd',
    4 'http://212.130.77.78/StandatWS/StandatXSD.asmx/GetSchema?SchemaFile=std00019.xsd'
    5 );
    6* end;
    SQL> /
    begin
    ERROR at line 1:
    ORA-00600: internal error code, arguments: [qmxiCreateColl2], [11], [], [], [],
    ORA-06512: at "XDB.DBMS_XMLSCHEMA_INT", line 0
    ORA-06512: at "XDB.DBMS_XMLSCHEMA", line 185
    ORA-06512: at line 2
    I am using 9.2.0.6.0
    Do you know if there is a maximum length for an enumeration in Oracle?

    If you cannot post the schema, please open an iTAR with Oracle Support so that the schema can be uploaded.

  • Syncing Groups with Outlook 2010 and Iphone 5

    I am having trouble getting the groups in my Outlook 2010 contacts to transfer to my iPhone 5. I have resynced my history, but that has not helped. I have several groups that I email on a regular basis for work-related issues; these groups have up to 125 people. The people in the groups change, so I move them to an inactive group in Outlook 2010, but they will not move out of the current group on my iPhone. HELP!

    What app are you syncing the tasks to? Are you using an Exchange account?

Maybe you are looking for

  • Problem adding new field to free goods Determination

    Hi everyone. We need to add a new field for free goods determination. According to the documentation in SPRO, the new field should be added in SE11 to KOMK under the include KOMKZ for free goods. However, there is no KOMKZ include in KOMK. How is this suppos

  • Promotion: points collection and free goods

    Dear experts, our Sales Dept. is going to propose special promotions based on points collection. For example, each time a customer orders 100 PCS he is awarded with 1 point. At the end of the promotion validity period, he gets the right to receive a

  • Itunes cannot open, missing msvcr80.dll, Help??

    went to go download the new iTunes today and even during the download it gives me a hard time in regards to "Apple Mobile Device", but even if it goes through it gives me an error saying missing MSVCR80.DLL. Does anyone know a solution or has apple re

  • New 80GB ipod - red cross with support website

    Hi, I was connecting my new iPod (only 3 weeks old) to my MacBook (running Leopard) when suddenly a red cross symbol came up on the screen with the address of the Mac support website at the bottom. I went to the website and it suggested I put it in disk mode. T

  • IMPORTANT. copying files in java

    I would like to know how I copy a file from one dir to another, or do I have to just read from one file and write it to the other? Thanks for all the help