Data Anomalies on DS 5.2 - large allocation for small # of entries
I have 2 DS 5.2 systems running on Red Hat ES 3.
These systems have only 1.5 million entries, yet are using
over 16 GB of storage. Recently we had about 200k worth of updates,
which added almost 2 GB of storage. Read performance is excellent,
but adds/updates are extremely poor, taking almost 1 second per add using
Java. The actual data per entry is small, less than 200 bytes.
I am not responsible for the organization of the data, but suspect -
from experience with Oracle and Sybase - that the data is not organized
correctly, or we need to do data maintenance, such as dropping
and recreating indexes.
Where can I find heuristics or documentation on these issues? I know the devil is in the details, but the current system does not feel 'right'.
Any help gratefully accepted,
JYard
UCLA
If you have too many unneeded indexes, especially substring indexes, you will use a lot of extra disk space. Also, maintaining these indexes will impact write performance for sure. You should make sure that you have all the indexes you need, but ONLY the indexes you need.
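On DS 5.x the per-attribute index types are configuration entries under cn=config, so dropping a substring index is a one-attribute change. A sketch, assuming the default userRoot backend (the attribute name is illustrative); a re-index or restart may be required afterwards per the DS documentation:

```ldif
# Keep only equality and presence indexing for the attribute;
# removing the "sub" value drops the expensive substring index.
dn: cn=description,cn=index,cn=userRoot,cn=ldbm database,cn=plugins,cn=config
changetype: modify
replace: nsIndexType
nsIndexType: eq
nsIndexType: pres
```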
Eric
Similar Messages
-
4.2.3/.4 Data load wizard - slow when loading large files
Hi,
I am using the data load wizard to load CSV files into an existing table. It works fine with small files of up to a few thousand rows. When loading 20k rows or more, the loading process becomes very slow. The table has a single numeric column for the primary key.
The primary key is declared at "shared components" -> logic -> "data load tables" and is recognized as "pk(number)" with "case sensitive" set to "No".
While loading data, this configuration leads to the execution of the following query for each row:
select 1 from "KLAUS"."PD_IF_CSV_ROW" where upper("PK") = upper(:uk_1)
which can be found in the v$sql view while loading.
It makes the loading process slow, because the UPPER function prevents any index from being used.
It seems that the setting of "case sensitive" is not evaluated.
Dropping the numeric index for the primary key and using a function based index does not help.
Explain plan shows an implicit "to_char" conversion:
UPPER(TO_CHAR(PK)) = UPPER(:UK_1)
This is missing in the query but maybe it is necessary for the function based index to work.
Please provide a solution or workaround for the data load wizard to work with large files in an acceptable amount of time.
Best regards
Klaus

Nevertheless, a bulk loading process is what I would really like to have as part of the wizard.
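If the wizard's lookup really executes UPPER(TO_CHAR(PK)) = UPPER(:uk_1), as the explain plan suggests, then an index on that exact expression, including the implicit TO_CHAR, should let the lookup use an index again. A sketch (Oracle; table and column names from the post, index name illustrative):

```sql
-- Function-based index matching the expression from the explain plan,
-- including the implicit TO_CHAR on the numeric PK column.
CREATE INDEX pd_if_csv_row_upk_ix
  ON "KLAUS"."PD_IF_CSV_ROW" (UPPER(TO_CHAR("PK")));
```

This differs from a function-based index on UPPER(PK) alone, which the optimizer cannot match against the converted expression.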
If all of the CSV files are identical:
use the Excel2Collection plugin (Process Type Plugin - EXCEL2COLLECTIONS)
create a VIEW on the collection (makes it easier elsewhere)
create a procedure (in a Package) to bulk process it.
The most important thing is to have, somewhere in the Package (ie your code that is not part of APEX), information that clearly states which columns in the Collection map to which columns in the table, view, and the variables (APEX_APPLICATION.g_fxx()) used for Tabular Forms.
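The bulk step (point 3) can be as simple as a single INSERT ... SELECT from the collections view. A sketch, assuming the plugin loaded the file into a collection named CSV_DATA; the target table and column mapping are illustrative:

```sql
-- Bulk-load the collection contents in one statement; c001..c003 are the
-- generic APEX_COLLECTIONS columns the plugin fills, mapped here to
-- illustrative target columns.
INSERT INTO target_table (pk, col1, col2)
SELECT TO_NUMBER(c001), c002, c003
  FROM apex_collections
 WHERE collection_name = 'CSV_DATA';
```

A set-based insert like this avoids the row-by-row lookup that makes the wizard slow.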
MK -
I have a production mobile Flex app that uses RemoteObject calls for all data access, and it's working well, except for a new remote call I just added that only fails when running with a release build. The same call works fine when running on the device (iPhone) using a debug build. When running with a release build, the result handler is never called (nor is the fault handler called). Viewing the BlazeDS logs in debug mode, the call is received and sent back with data. I've narrowed it down to what seems to be a data size issue.
I have targeted one specific data call that returns a String value 44kb in length, which fails in the release build (the result or fault handler is never called), but the result handler is called as expected in the debug build. When I do not populate the String value (in server-side Java code) on the object (just set it to an empty string), the result handler is called, and the object is returned (release build).
The custom object being returned in the call is a very simple object, with getters/setters for the simple types boolean, int, and String, and one org.w3c.dom.Document type. This same object type is used on other RemoteObject calls (different data) and works fine (release and debug builds). I originally was returning a Document but, just to make sure this wasn't the problem, changed the value to be returned to a String, to rule out XML/DOM issues in serialization.
I don't understand 1) why the release build vs. debug build behavior is different for a RemoteObject call, and 2) why the calls work in the debug build when sending over a somewhat large (but not unreasonable) amount of data in a String object, but not in the release build.
I haven't tried to find out exactly where the failure point in size is, but I'm not sure that's even relevant, since 44kb isn't an unreasonable size to expect.
By turning on debug mode in BlazeDS, I can see the object and its attributes being serialized, and everything looks good there. The calls are received and processed appropriately in BlazeDS for both debug and release build testing.
Anyone have an idea on other things to try to debug/resolve this?
The platform being tested is BlazeDS 4, Flash Builder 4.7, a WebSphere 8 server, and an iPhone (iOS 7.1.2). I tried multiple Flex SDKs, from 4.12 to the latest 4.13, with no change in behavior.
Thanks!

After a week's worth of debugging, I found the issue.
The Java type returned from the call was defined as ArrayList. Changing it to List resolved the problem.
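The change described amounts to widening the declared return type from the concrete class to the interface. A sketch of the before/after shape (the class and method names are illustrative, not from the original service):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of a BlazeDS remoting service method.
// Before (failed in the release build):
//     public ArrayList<String> loadNames() { ... }
// After: declare the List interface as the return type instead.
public class CustomerService {

    public List<String> loadNames() {
        List<String> names = new ArrayList<String>();
        names.add("Alice");
        names.add("Bob");
        return names;
    }
}
```

Why the concrete ArrayList failed only in the release build isn't clear from the post; declaring against the interface is simply the change that resolved it.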
I'm not sure why ArrayList isn't a valid return type; I've been looking at the Adobe docs and still can't see why it isn't valid. And why it works in debug mode but not in a release build is even stranger. Maybe someone can shed some light on the logic here for me. -
Error in actual template allocation for Business Process
Gurus,
I am facing a strange problem in actual template allocation for business processes (t-code CPAS). When I do a test run, the system gives the result with the message "Processing completed with no errors". However, when I uncheck the test run and run the allocation, the system shows me the result but no posting takes place. The message I am getting is:
"Data not updated due to errors". The details of this error message are:
No information was found
Message no. GU444
Diagnosis
The system could not find the necessary information.
I am absolutely clueless and have no idea where things have gone wrong. I have maintained the activity and cost center properly. The template I am using is also ok.
Request your help on this. Any suggestion would be highly appreciated.
Thanks in advance!
Snigdho.

Hi,
With this kind of error ("Data not updated due to errors") it is very difficult to determine the exact cause of the issue.
Does your selection screen in CPAS have the detailed list option ticked? Please tick the detailed list icon and run CPAS again, and let me know if you get a more detailed error from SAP.
Regards
Sarada -
Azure + Sync Framework + Error: Value was either too large or too small for a UInt64
Hi,
We have an in-house developed syncronisation service built on the Sync Framework v2.1 which has been running well for over 2 years. It pushes data from a local SQLServer 2005 database to one hosted on Azure with some added encryption.
The service was stopped recently and when we try to re-start it, it fails with the error:
System.OverflowException: Value was either too large or too small for a UInt64.
at System.Convert.ToUInt64(Int64 value)
at System.Int64.System.IConvertible.ToUInt64(IFormatProvider provider)
at System.Convert.ToUInt64(Object value, IFormatProvider provider)
at Microsoft.Synchronization.Data.SyncUtil.ParseTimestamp(Object obj, UInt64& timestamp)
at Microsoft.Synchronization.Data.SqlServer.SqlSyncScopeHandler.GetLocalTimestamp(IDbConnection connection, IDbTransaction transaction)
I have found Hotfix 2703853, and we are proposing to apply it to our local server, but we have found that running SELECT CONVERT(INT, @@dbts) on the local database returns 1545488692, while running the same query on the cloud database returns -2098169504, which indicates the issue is on the Azure side. Would applying the hotfix to our local server resolve the issue, or would it need to be somehow applied to the Azure server?
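One caveat when comparing the two values: @@DBTS is an 8-byte timestamp, so CONVERT(INT, @@dbts) overflows once the counter passes 2^31, and the negative number alone doesn't prove the Azure value is out of range for the Sync Framework. Converting to BIGINT shows the full value (a sketch; column alias illustrative):

```sql
-- @@DBTS is binary(8); a 32-bit conversion can go negative.
-- Compare the full 64-bit value on both databases instead.
SELECT CONVERT(BIGINT, @@DBTS) AS db_timestamp;
```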
Thanks in advance for any assistance!
Chris

Hi,
We have now applied the Sync Framework hotfixes to our server and re-provisioned the sync service. No errors were reported and the timestamp values were all within the required range. On re-starting the service the system worked as anticipated. It has now
been running for a week and appears to be stable. No further changes were required other than installing the hotfixes and re-provisioning the scope.
Chris -
Please can you help me!
Searched the web and found plenty of advice, but I'm still getting a formatting/display issue when viewing the HTML newsletter in MS Outlook.
I'm fully aware of the basics regarding the multiple issues when creating HTML newsletters, but this is driving me crazy.
Apparently there is an image height limit within Outlook (I can't find out what it is), so I have sliced my larger images into 5/6 parts, which solves the display issue in Outlook.
But the gaps between the slices are now being displayed as small blank spaces within Hotmail.
I did use <br> between each slice, as without them the fixed 600px-wide containing table expanded due to the slices stacking horizontally.
Also, I'm still getting small gaps (like <br> spaces) between all images in Outlook, while everything displays perfectly (no gaps) in a browser.
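For what it's worth, the classic cause of those slice gaps is that images render inline, sitting on the text baseline. Forcing each slice onto its own table row with display:block usually removes the gaps in both Outlook and Hotmail, and makes the <br> tags unnecessary. A sketch with illustrative file names and dimensions:

```html
<!-- Each slice in its own row; display:block prevents the inline-image
     baseline gap that shows up in Outlook and Hotmail. -->
<table width="600" cellpadding="0" cellspacing="0" border="0">
  <tr><td><img src="slice1.jpg" width="600" height="200" style="display:block;" alt="" border="0"></td></tr>
  <tr><td><img src="slice2.jpg" width="600" height="200" style="display:block;" alt="" border="0"></td></tr>
</table>
```

Since the rows stack the slices vertically on their own, no <br> (and therefore no <br>-sized gap) is needed between them.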
This is my newsletter displaying correctly via a browser:
http://eu.shorts.tv/site-admin/modules/mod_mail/SHORTSTV_DECEMBER_2012.htm
Using Dreamweaver 4 (do have the latest version via Adobe Creative Cloud Membership but not on this system).
Hope you can help
Many thanks

Many thanks David
I also found this article which is currently sitting on my desk.
Pdf would make perfect sense or even a url link to view via a browser but these guys need it contained within the email.
Thanks again for your kind advice.
Regards
ShortsTV
Date: Fri, 30 Nov 2012 18:05:08 -0700
From: [email protected]
To: [email protected]
Subject: MSOutlook HTML newsletter issues - large gaps between large images and small gaps...
Re: MSOutlook HTML newsletter issues - large gaps between large images and small gaps...
created by David__B in Adobe Creative Cloud - View the full discussion
Hey Shortstv, Not something I know much about, searched and found this: http://robcubbon.com/create-html-email-newsletters-outlook/ Maybe create it as a PDF attachment instead? -Dave
ODT error in VS2005: Value was either too large or too small for an Int32
Using ODT's Oracle Explorer in VS2005 I connected to a 3rd party's Oracle9i database that's been around for a while. I expanded the tables node and then attempted to expand a specific table. It then displayed a popup message and never expanded the table so I could manage the columns.
The error is:
An error occurred while expanding the node:
Value was either too large or too small for an Int32
I recreated the table, with no data, in another database (same version of oracle, different physical server) and was able to expand the table in ODT's Oracle Explorer.
I went back to the other database in Oracle Explorer and tried to expand the table and it failed with the same error message.
The only difference I can see is that the first table contains a lot of data (gigabytes), while the other table (the duplicate I created to reproduce the error) does not have any data.
here's the definition of the table minus the actual table and field names.
FLD6 contains jpg data from a 3rd party Oracle Forms application. The jpg data is between 100K and 20MB.
CREATE TABLE myTable
(
  FLD1   VARCHAR2(30 BYTE),
  FLD2   VARCHAR2(15 BYTE),
  FLD3   VARCHAR2(20 BYTE),
  FLD4   VARCHAR2(20 BYTE),
  FLD5   NUMBER(3),
  FLD6   LONG RAW,
  FLD7   VARCHAR2(80 BYTE),
  FLD8   DATE,
  FLD9   VARCHAR2(20 BYTE),
  FLD10  VARCHAR2(20 BYTE),
  FLD11  VARCHAR2(99 BYTE),
  FLD12  VARCHAR2(256 BYTE)
)
TABLESPACE myTableSpace
PCTUSED 0
PCTFREE 10
INITRANS 1
MAXTRANS 255
STORAGE (
  INITIAL 2048M
  MINEXTENTS 1
  MAXEXTENTS 2147483645
  PCTINCREASE 0
  BUFFER_POOL DEFAULT
)
LOGGING
NOCOMPRESS
NOCACHE
NOPARALLEL
NOMONITORING;
This is just to let the developers know I ran into a problem. I've already gotten around the issue by using an alternative tool.

Hi,
You can also use the Map TestTool to test your maps. It uses the BizTalk engine to execute the map. You can select a map that is deployed to the GAC and execute it.
You can download the sample tool with the source code here:
TestTool for BizTalk 2013
http://code.msdn.microsoft.com/Execute-BizTalk-2013-maps-e8db7f9e
TestTool for BizTalk 2010
http://code.msdn.microsoft.com/Execute-a-BizTalk-map-from-26166441
Kind regards,
Tomasso Groenendijk
Blog
| Twitter
MCTS BizTalk Server 2006, 2010
If this answers your question please mark it accordingly -
Run Allocation for integer values
Hi experts,
I am running an allocation for a HeadCount account (all integer values) and I would like the result of the allocation to also be integer values, using a round instruction for example.
Could I define in the Account dimension that the signed data must be integer? Or use rounding logic?
Regards

Correct me if I am wrong:
E.g.1
*REC(EXPRESSION=int(%value%))
E.g.2
*RUNALLOCATION
*FACTOR=USING
*DIM ACCOUNTB WHAT=BR030; WHERE=<<<; USING=PR01;
*DIM TIMEB WHAT=%YEAR%.TOTAL_D; WHERE=[PARENTH1]= '%YEAR%.TOTAL'; USING=<<<;
*ENDALLOCATION
*COMMIT
How could I apply the INT statement in a RUNALLOCATION? -
Resource allocation for a part time employee
Dear All,
I am using MS Project 2013 Professional Edition.
For two tasks I have been given a student assistant who will be assigned to the project from August 1st. Because of his studies he will be available every day in August (100%) and 1 day a week from September to the end of the project (20% a week).
I have created him as a work resource and defined his availability in the resource property dialogue. That is 100% in August and 20% for the following months.
Image 1. "Assign Ressources" dialogue box
At the same time I have created the two task with a duration of 18 days each.
I have now allocated the student assistant to the tasks and defined the unit to be 50% for each of the tasks. That has extended the duration of both tasks to 36 days... so far so good.
My "problem" is now that, though I have defined his availability, I see that:
1) MS Project 2013 allocates him from before August 1st, when he is not defined as available.
2) If I define August 1st as the starting date for the tasks, MS Project 2013 will allocate him every day from that date (see image 2) without respecting that he is only available 1 day a week (20%) from September 1st.
Image 2. Ressource allocation for part time employee with 2 tasks.
In both situations I will get a warning saying that he is overbooked.
I would expect MS Project to handle the information given in "Assign Resources" and only allocate the resource during the defined time range... 100% in August and 20% for the following months.
Any suggestions on how to solve this or what I am interpreting/doing wrong?
Thank you in advance,
Caldes

Hi Caldes,
MS Project will not automatically level the resource workload based on the availability you defined for this resource; that availability is just used to calculate whether or not the resource is overallocated.
In your case, I would suggest to use the resource usage view displaying in the right part the work and choosing the monthly time periods for the timephased grid. Then you'll be able to manually enter the quantity of work in the appropriate cells, based on
the resource availability.
Hope this helps,
Guillaume Rouyre, MBA, MCP, MCTS | -
SELECTing from a large table vs small table
I posted a question a few months back about the comparison between INSERTing into a large table vs. a small table (fewer number of rows), in terms of time taken.
The general consensus seemed to be that it would be the same, except for the time taken to update the index (which will be negligible).
1. But now, following the same logic, I am confused as to why SELECTing from a large table should be more time-consuming ("expensive") than SELECTing from a small table.
(SELECTing using an index)
My understanding of how Oracle works internally is this:
It will first locate the ROWID from the B-Tree that stores the index.
(This operation is O(log N), based on the B-Tree.)
The ROWID essentially contains the file pointer offset of the location of the data on disk.
And Oracle simply reads the data from the location it deduced from the ROWID.
But then the only variable I see is searching the B-Tree, which should take O(log N) time for comparison (N = number of rows).
Am I correct above?
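Rather than reasoning about the B-Tree cost in the abstract, you can ask Oracle what it actually plans to do. A sketch using the standard DBMS_XPLAN interface (the table and predicate are illustrative):

```sql
-- Capture the optimizer's chosen plan for an indexed lookup...
EXPLAIN PLAN FOR
  SELECT ename FROM emp WHERE empno = :id;

-- ...then display it (e.g. INDEX UNIQUE SCAN + TABLE ACCESS BY INDEX ROWID).
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```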
2. Also, I read that tables are partitioned for performance reasons, and I read about various partition mechanisms, but I cannot figure out how partitioning can result in a performance improvement.
Can somebody please help?
It's not going to be that simple. Before your first step (locating the ROWID from the index), Oracle will first evaluate various access plans - potentially thousands of them - and choose the one that it thinks will be best. This evaluation is based on the number of rows it anticipates having to retrieve, whether or not all of the requested data can be retrieved from the index alone (without even going to the data segment), etc. For each consideration it makes, you start with "all else being equal". Then figure there will be dozens, if not hundreds or thousands, of these "all else being equal" assumptions. And once the plan is selected and the rubber meets the road, we have to contend with the fact that all else is hardly ever equal. -
How to find the current CPU and Memory (RAM) allocation for OMS and Reposit
Hi There,
How do I check the CPU and memory (RAM) allocation for the OMS and the Repository database? I'm following the "Oracle Enterprise Manager Grid Control Installation and Configuration Guide 10g Release 5 (10.2.0.5.0)" documentation and it says to ensure the following:
Table 3-1 CPU and Memory Allocation for Oracle Management Service
Deployment Size               | Hosts | CPU/Host  | Physical Memory (RAM)/Host | Total Recommended Space
Small (100 monitored targets) | 1     | 1 (3 GHz) | 2 GB                       | 2 GB

Table 3-2 CPU and Memory Allocation for Oracle Management Repository
Deployment Size               | Hosts | CPU/Host  | Physical Memory (RAM)/Host | Total Recommended Space
Small (100 monitored targets) | 1     | 1 (3 GHz) | 2 GB                       | 10 GB
Thanks,
J

Hi J,
This is the minimum requirement; however, it will work fine.
Also read the article below on "Oracle Enterprise Manager Grid Control Architecture for Very Large Sites":
http://www.oracle.com/technology/pub/articles/havewala-gridcontrol.html
For GRID HA solution implementation please read :
http://www.oracle.com/technology/deploy/availability/pdf/MAA_WP_10gR2_EnterpriseManagerBestPractices.pdf
Regards
Rajesh -
Serving large files for download using JSF
Hello Community,
my JSF controller bean has a method which is called using method binding on either a commandButton or commandLink. Its purpose is to stream CSV datasets from a database to the browser, which then opens an application like MS Excel, OOCalc or Gnumeric. Everything works fine if there is only a small number of datasets, but if there are several thousand datasets I get an OutOfMemoryError. So it seems that the data is somehow cached/buffered by the JSF servlet. Is there a known workaround to my problem?
public void exportFile() {
FacesContext context = FacesContext.getCurrentInstance();
HttpServletResponse response =
(HttpServletResponse)context.getExternalContext().getResponse();
response.setContentType("text/csv");
response.setHeader("Content-disposition",
"attachment; filename=data.csv");
try {
    /*
     * This method iterates a java.sql.ResultSet and writes the data
     * to the ServletOutputStream: "Write and Forget"
     */
    writeData(response.getOutputStream()); // <<< If there are many datasets an OutOfMemoryError is produced
response.getOutputStream().flush();
response.getOutputStream().close();
context.responseComplete();
} catch (IOException e) {
e.printStackTrace();
}
}

Thanks, Alexander -

Chrisse,
Ann is one of the most knowledgeable Photoshop folks here and I respect her immensely. However, she comes from a long film background and is brand new to quality digital camera capture. Ann is
incorrect when she says
>I think that it is probably a toss-up between which would be worse: an interpolated rez-up or a 170 ppi print. Basically, Chrisse needs a better camera if she wants to make prints of this size let alone 11" x 14" ones.
Ann speaks from very extensive film-scan experience, but the reality is that "uprezzing" digital camera image capture is a whole different thing than uprezzing scanned film images. Those of us who do have substantial digicam experience have found a surprising ability to successfully uprez digicam image captures.
Certainly,
well-shot pix from your G9 will normally uprez to print 8x10s just fine and probably 11x14s as well.
Do not hesitate to experiment with uprezzing digicam captures - - including very large amounts of uprez like 2x or more. Test - test - test because each image and how presented/used is different.
I have found however that "well-shot," especially as regards exposure and focus, is important to allow good uprez. Also be especially careful with post-process edits because uprezzing can exacerbate editing distortion.
You need not worry about whether Ann is correct or I am - just do it and judge the results. Due to typical viewing distances, I personally use 300-360 ppi for small prints, 240-300 ppi for 8x10, 240 ppi for 11x14, and 180 ppi for large posters. But like others have said, if the ppi comes reasonably close at your chosen image size, do not force a recalculation to reach precisely 240 ppi or whatever; just leave the resample box unchecked. -
Animated Gif with large base image & small animated part.
Hello guys
I'm not really sure how to explain this, due to my limited English. I will try with images.
I can easily create an animated GIF out of multiple layers, given each layer is identical, with small changes occurring over time.
But I have yet to figure out an animated GIF that uses one large image for the base, with only a small part of it animated.
I always get the animated part working without the large base applying across all the frames. All it does is flash once its frame is reached and then become transparent, showing only the small animated part.
For example, this is a GIF made with a Galaxy S4, of my friend playing with his phone, imported into PS CS6. On the Galaxy, after I record the GIF, I can use my finger to touch, mask and freeze the parts I don't want to move, and leave only the small, animated bit.
When I import it to PS, it shows one layer with the full image, and a bunch of frames with the animation part only.
http://i.imgur.com/UAiopQA.jpg
http://i.imgur.com/7XOGGV6.jpg
Problem is, once the image is open in PS, I'm not able to export it so that it keeps working in the same manner. Given that the Samsung's GIFs are 8 to 10 MB, it's hard to edit them to make them more size-friendly.
The GIF clearly works the way I describe, so there is a setting or method I don't know about.
If PS is not the best tool for editing GIFs, what other apps would you recommend I use?
Thank you for taking the time to read
best regards
Monica

This has been an ongoing issue for me since I switched from PowerPoint to Keynote. Most of the animated GIFs with transparent backgrounds that I used with PowerPoint are no longer transparent in Keynote. You may want to search for those earlier threads on this topic...
To summarize: I've had to open up my animated gifs in After Effects and use the Color Key effect to restore transparency, with mixed success.
Good luck! -
Training and event management - Internal Activity Allocation for Attendees(PV18)
Dear all,
I run the PV18 transaction (Internal Activity Allocation for Attendees). Let's imagine that I successfully allocated 1000 EUR in the CO module. Then I realize that it was a mistake, and I want to correct this by sending -1000 EUR to CO.
My question is: how can I do that? Is PV18 capable of doing this? Or this has to be corrected manually in CO module?
Thanks
Nándor

>> I tried to create it and the system generated a message about using the allowed name space.
If it is a warning message, I think you can go ahead and create the HRTEM entry in the TTYP table. You can check whether this lets you get past the activity allocation.
~Suresh -
Hi ,
using user_segments, and especially the blocks column, I get the following results (in the SCOTT schema):
SQL> select segment_name , blocks from user_segments where segment_name in ('DEPT','EMP');
SEGMENT_NAME BLOCKS
DEPT 8
EMP 8
Using user_tables and the blocks column, I get the following results:
SQL> SELECT TABLE_NAME , BLOCKS FROM USER_TABLES WHERE TABLE_NAME IN ('DEPT','EMP');
TABLE_NAME BLOCKS
DEPT 5
EMP 5
So, is it correct to say that the space usage for each of these tables is 5/8 = 62.5%?
Thanks a lot
Simon

The difference of those is empty_blocks, isn't it?

That is not the case:
SQL> select segment_name , blocks from user_segments where segment_name in ('DEPT','EMP');
SEGMENT_NAME BLOCKS
DEPT 8
EMP 8
SQL> SELECT TABLE_NAME , BLOCKS , EMPTY_BLOCKS FROM USER_TABLES WHERE TABLE_NAME IN ('DEPT','EMP');
TABLE_NAME BLOCKS EMPTY_BLOCKS
DEPT 5 0
EMP 5 0
SQL>

Nicolas.
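For what it's worth, the two views measure different things: USER_SEGMENTS.BLOCKS is the space allocated to the segment (including the segment header and any blocks above the high-water mark), while USER_TABLES.BLOCKS is an optimizer statistic - used blocks below the high-water mark as of the last statistics gathering. A sketch that refreshes the statistic before comparing the two:

```sql
-- Refresh the statistic behind USER_TABLES.BLOCKS, then compare
-- allocated vs. used blocks for the two SCOTT tables.
EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'EMP');
EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'DEPT');

SELECT t.table_name,
       s.blocks AS allocated_blocks,   -- from USER_SEGMENTS
       t.blocks AS used_blocks         -- below the high-water mark
  FROM user_tables t
  JOIN user_segments s ON s.segment_name = t.table_name
 WHERE t.table_name IN ('DEPT', 'EMP');
```

So 5/8 is not a "usage percentage" in any strict sense; the 3-block difference is mostly segment overhead and never-used allocated blocks.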