Very fast growing STDERR# File
Hi experts,
I have stderr# files on two app servers which are growing very fast.
The problem is that I can't open the files via ST11, as they are too big.
Is there a guide explaining what this file is about and how I can manage it (reset, ...)?
Might it be a lock log?
I have a few entries in SM21 about failed lock operations.
I can also find entries such as "call recv failed" and "comm error, cpic return code 020".
Thx in advance
Dear Christian,
The stderr* files are used to record syslog and logon-check output. While the system is up, only one of them is in use, and you can delete the others: for example, if stderr1 is currently in use, you can delete stderr0, stderr2, stderr3, and so on. The file that is in use can only be deleted after shutting down the application server. Once deleted, the files are created again, and they will only grow large if the original cause still exists; switching between them is internal and is not controlled by size.
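Since the active stderr file is the one the kernel is still writing to, one way to tell the candidates apart is by modification time. Below is a minimal sketch, my own illustration rather than anything from the post: the work-directory path and the "most recently modified file is the active one" heuristic are both assumptions.

```python
from pathlib import Path

# Hypothetical instance work directory; adjust to your own
# /usr/sap/<SID>/<instance>/work path.
WORK_DIR = Path("/usr/sap/PRD/DVEBMGS00/work")

def stderr_report(workdir: Path):
    """List stderr* files newest-first by modification time.

    Heuristic only: the most recently modified file is *likely* the one
    still being written to; the rest are candidates for deletion while
    the instance is running.
    """
    if not workdir.is_dir():
        return []
    files = sorted(workdir.glob("stderr*"),
                   key=lambda p: p.stat().st_mtime, reverse=True)
    return [(p.name, p.stat().st_size,
             "likely in use" if i == 0 else "candidate for deletion")
            for i, p in enumerate(files)]

for name, size, status in stderr_report(WORK_DIR):
    print(f"{name:12} {size:>12} bytes  {status}")
```

Before deleting anything, it is safer to confirm with OS tools (e.g. an open-file listing) which file the work processes actually hold open.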
Some causes of stderr file growth:
In the case of repeated input/output errors on a TemSe object (particularly in the background), large amounts of trace information are written to stderr. This information is not necessary, and not useful in this quantity.
Please carefully review the following Notes:
48400 : Reorganization of TemSe and Spool
(use this to delete old TemSe objects)
RSPO0041 (or RSPO1041), RSBTCDEL: To delete old TemSe objects
RSPO1043 and RSTS0020 for the consistency check.
1140307 : STDERR1 or STDERR3 becomes unusually large
Please also run a Consistency Check of DB Tables as follows:
1. Run Transaction SM65
2. Select Goto ... Additional tests
3. Select "Consistency check DB Tables" and click execute.
4. Once you get the results check to see if you have any inconsistencies
in any of your tables.
5. If any inconsistencies are reported, then run the Background
Processing Analysis (SM65 .. Goto ... Additional Tests) again.
This time check both the "Consistency check DB Tables" and the
"Remove Inconsistencies" options.
6. Run this a couple of times until all inconsistencies are removed from
the tables.
Make sure you run this SM65 check when the system is quiet and no other batch jobs are running, because the check locks the TBTCO table until it finishes, and that table may be needed by any batch job that is running or scheduled to run while the SM65 checks are in progress.
Running these jobs daily should ensure that the stderr files do not increase at this rate in the future.
If the system is running smoothly, these files should not grow very fast, because they mostly just record error information as it occurs.
For more information about stderr please refer to the following note:
12715: Collective note: problems with SCSA
(the Note contains information about what is written to stderr and how the file is created).
Regards,
Abhishek
Similar Messages
-
Database data file growing very fast
Hi
I have a database that runs on SQL server 2000.
A few months back, the database was moved to a new server because the old server had crashed.
There were no issues on the old server, which had been in use for more than 10 years.
I noticed that the data file has been growing very fast since the database was moved to the new server.
When I run "sp_spaceused", a lot of space is unused. Below is the result:
database size = 50950.81 MB
unallocated space = 14.44 MB
reserved = 52048960 KB
data = 9502168 KB
index size = 85408 KB
unused = 42461384 KB
When I run "sp_spaceused" for only one big table, the result is:
reserved = 19115904 KB
data = 4241992 KB
index size = 104 KB
unused = 14873808 KB
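The figures above can be sanity-checked: sp_spaceused reports reserved space as the sum of data, index size, and unused space. A small sketch using exactly the numbers quoted in the post (all values in KB):

```python
# Verify that reserved = data + index + unused for the sp_spaceused
# output above, and compute how much of the reserved space is unused.

def space_breakdown(reserved_kb, data_kb, index_kb, unused_kb):
    accounted = data_kb + index_kb + unused_kb
    unused_pct = 100.0 * unused_kb / reserved_kb
    return accounted, unused_pct

# Whole database
acc, pct = space_breakdown(52_048_960, 9_502_168, 85_408, 42_461_384)
print(f"database: accounted {acc} KB, {pct:.1f}% of reserved space unused")

# The single big table
acc, pct = space_breakdown(19_115_904, 4_241_992, 104, 14_873_808)
print(f"big table: accounted {acc} KB, {pct:.1f}% of reserved space unused")
```

Both breakdowns add up exactly, and roughly 80% of the reserved space is unused in each case, which is what the answer below attributes to index-maintenance jobs.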
I have shrunk the database, but the size didn't decrease.
May I know how to reduce the size? Thanks.
Hello Thu,
can you check whether you have active Jobs in Microsoft SQL Server Agent which may...
rebuild Indexes?
run maintenance Jobs of your application?
I'm quite confident that index maintenance will cause the "growth".
Shrinking the database is...
useless and
nonsense
if you have index maintenance tasks. Shrinking the database moves data pages from the very end of the database to the first free area in the database file(s), which causes index fragmentation.
If the nightly index maintenance job rebuilds the indexes, it allocates NEW space in the database for the rebuilt data pages!
Read the blog post from Paul Randal about it here:
http://www.sqlskills.com/blogs/paul/why-you-should-not-shrink-your-data-files/
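As a toy illustration of that interaction (my own model, not from the post or the blog): an index rebuild needs roughly the index's own size in free space, so a file shrunk to its minimum has to grow again the next time the maintenance job runs.

```python
# Toy model of the shrink-then-rebuild cycle. Sizes are arbitrary MB
# figures chosen for illustration; the rule of thumb that a rebuild
# needs about one index-copy worth of free space is an assumption.

def file_size_after_rebuild(data_mb: float, index_mb: float,
                            free_mb: float) -> float:
    """Return the file size after a rebuild, growing it if needed."""
    needed = index_mb  # space for the new copy of the index being rebuilt
    if free_mb < needed:
        return data_mb + index_mb + needed   # file must grow to fit
    return data_mb + index_mb + free_mb      # existing free space reused

shrunk = file_size_after_rebuild(9000, 1000, free_mb=0)      # freshly shrunk
left_alone = file_size_after_rebuild(9000, 1000, free_mb=1500)
print(shrunk, left_alone)
```

The freshly shrunk file has to grow by a full index-copy overnight; the file that kept its free space does not change, which is why shrink/rebuild cycles just churn the file.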
MCM - SQL Server 2008
MCSE - SQL Server 2012
db Berater GmbH
SQL Server Blog (german only) -
Hi all
I have a strange behaviour on my Mac Mini 2012. The Mini is connected to the internet by LAN. When I download a file via HTTP or FTP, everything is very fast (around 95 Mbit/s). But when I try to open a website in Safari, Chrome, or Firefox, I have to wait a few seconds until the page starts rendering. After the page is rendered completely, everything works fast and smooth.
Do you have any idea where the problem might be? I reinstalled Mavericks from scratch yesterday, but the problem persists. With my MacBook Air (connected to the same router) and my Windows computer I don't have this problem.
THanks in advance for your help.
Greets
Marc
By any chance, do you run Sophos? I had the same issue as you and fixed it on my Mac Mini and MacBook Pro by doing the following:
Sophos --> Open Preferences --> Web Protection --> General
You will need to enter your admin password after clicking the padlock in the bottom left to allow you to change settings.
From there, deactivate "Block access to malicious websites using realtime URL reputation checks". For good measure, I also deactivated "Block malicious downloads from websites".
Hope this helps! -
WWV_FLOW_DATA growing very fast
Hi,
We have a public application, and we see wwv_flow_data growing very fast (up to 5 GB now).
In a way, this is a good sign ;) it means that we have a lot of hits... but we are also starting to see some contention on that table.
It would be nice to be able to set one purge schedule for public (nobody) sessions and another for authenticated sessions.
We have some people who have to stay connected all day, so we cannot purge sessions that are younger than 10 hours.
Is there another way to limit the number of records in wwv_flow_data than using wwv_flow_cache.purge_sessions(p_purge_sess_older_then_hrs => 24); ?
Thanks
Francis Mignault
http://insum-apex.blogspot.com/
http://www.insum.ca
In the /f?p=4050:65 APEX report I can see the sessions and users; is there any way I could use that to delete the records?
No, it doesn't let you select by user name.
You can login to the workspace, though, and navigate to:
Home>Administration>Manage Services>Manage Session State>Recent Sessions>Session Details
Here you can remove sessions one by one. But that's probably too tedious.
Scott -
PSAPSR3 Tablespace is only growing very fast in PROD
Dear All,
In our Prod server, the PSAPSR3 tablespace is growing very fast (note: within 5 days I have extended the PSAPSR3 tablespace twice).
Is the only permanent solution to keep extending the tablespace, or is there an alternative way to control the growth of a specific tablespace?
Please check the DB02 tablespace details:
PSAPSR3 219,640.00 10,010.81 95 YES 220,000.00 10,370.81 95 22 157,305 226,884 ONLINE PERMANENT
PSAPSR3700 71,120.00 3,506.75 95 YES 170,000.00 102,386.75 40 17 868 11,389 ONLINE PERMANENT
PSAPSR3USR 20.00 1.94 90 YES 10,000.00 9,981.94 0 1 38 108 ONLINE PERMANENT
PSAPTEMP 4,260.00 4,260.00 0 YES 10,000.00 10,000.00 0 1 0 0 ONLINE TEMPORARY
PSAPUNDO 10,000.00 8,391.44 16 NO 10,000.00 8,391.44 16 1 20 498 ONLINE UNDO
SYSAUX 480.00 22.88 95 YES 10,000.00 9,542.88 5 1 991 2,633 ONLINE PERMANENT
SYSTEM 880.00 5.44 99 YES 10,000.00 9,125.44 9 1 1,212 2,835 ONLINE PERMANENT
Kindly advise.
Dear MHO/Sunil/Eric,
The PSAPSR3 tablespace still keeps on growing.
Please check the DB02 segment details:
SAPSR3 BALDAT TABLE PSAPSR3 42,622.000 268.800 853 5,455,616
SAPSR3 SYS_LOB0000072694C00007$$ LOBSEGMENT PSAPSR3 5,914.000 191.533 277 756,992
SAPSR3 CDCLS TABLE PSAPSR3 9,091.000 38.400 327 1,163,648
SAPSR3 SYS_LOB0000082646C00006$$ LOBSEGMENT PSAPSR3 1,664.000 37.067 209 212,992
SAPSR3 BALDAT~0 INDEX PSAPSR3 5,049.000 32.000 266 646,272
SAPSR3 EDI40 TABLE PSAPSR3 3,155.000 23.467 233 403,840
SAPSR3 CDCLS~0 INDEX PSAPSR3 1,965.000 19.200 214 251,520
SAPSR3 BDCP2~001 INDEX PSAPSR3 1,543.000 18.400 208 197,504
SAPSR3 BDCPS~1 INDEX PSAPSR3 4,039.000 17.067 247 516,992
SAPSR3 APQD TABLE PSAPSR3 1,671.000 17.067 210 213,888
SAPSR3 CDHDR~0 INDEX PSAPSR3 2,183.000 12.800 218 279,424
SAPSR3 CDHDR TABLE PSAPSR3 2,305.000 12.800 220 295,040
SAPSR3 BDCP2~0 INDEX PSAPSR3 1,000.000 12.533 196 128,000
SAPSR3 ZBIPRICING~0 INDEX PSAPSR3 320.000 10.600 111 40,960
SAPSR3 WRPL TABLE PSAPSR3 288.000 8.700 107 36,864
SAPSR3 FAGL_SPLINFO TABLE PSAPSR3 1,016.000 8.000 198 130,048
SAPSR3 FAGL_SPLINFO_VAL~0 INDEX PSAPSR3 736.000 8.000 163 94,208
SAPSR3 ZBIPRICING TABLE PSAPSR3 208.000 6.931 97 26,624
SAPSR3 MARC~Y INDEX PSAPSR3 176.000 5.533 93 22,528
SYS WRH$_ACTIVE_SESSION_HISTORY WRH$_ACTIVE_2349179954_18942 TABLE PARTITION SYSAUX 6.000 5.375 21 768
SAPSR3 MARC~VBM INDEX PSAPSR3 152.000 4.867 90 19,456
SAPSR3 MARC~D INDEX PSAPSR3 136.000 4.367 88 17,408
SAPSR3 FAGLFLEXA TABLE PSAPSR3 2,052.000 4.267 216 262,656
SAPSR3 RFBLG TABLE PSAPSR3 3,200.000 4.267 233 409,600
SAPSR3 BDCPS TABLE PSAPSR3 1,280.000 4.267 203 163,840
SAPSR3 BDCP~POS INDEX PSAPSR3 3,392.000 4.267 236 434,176
SAPSR3 BALHDR TABLE PSAPSR3 864.000 4.000 179 110,592
SAPSR3 FAGL_SPLINFO~0 INDEX PSAPSR3 361.000 3.767 117 46,208
SAPSR3 ACCTIT TABLE PSAPSR3 289.000 3.733 108 36,992
SAPSR3 WRPT~0 INDEX PSAPSR3 112.000 3.731 85 14,336
SAPSR3 FAGL_SPLINFO_VAL TABLE PSAPSR3 448.000 3.467 127 57,344
SAPSR3 COEJ TABLE PSAPSR3 1,089.000 3.200 201 139,392
SAPSR3 ZBISALEDATA3 TABLE PSAPSR3 176.000 3.200 93 22,528
SAPSR3 COEP~1 INDEX PSAPSR3 927.000 3.167 187 118,656
SAPSR3 GLPCP TABLE PSAPSR3 891.000 2.933 183 114,048
SAPSR3 ZBISALEDATA TABLE PSAPSR3 376.000 2.933 118 48,128
SAPSR3 WBBP TABLE PSAPSR3 344.000 2.933 114 44,032
SYS WRH$_ACTIVE_SESSION_HISTORY WRH$_ACTIVE_2349179954_18918 TABLE PARTITION SYSAUX 6.000 2.594 21 768
SAPSR3 FAGL_SPLINFO~1 INDEX PSAPSR3 280.000 2.400 106 35,840
SAPSR3 SE16N_CD_DATA TABLE PSAPSR3 72.000 2.333 80 9,216
SAPSR3 KONH TABLE PSAPSR3 1,373.000 2.133 207 175,744
SAPSR3 GLPCA TABLE PSAPSR3 2,437.000 2.133 222 311,936
SAPSR3 BDCP~0 INDEX PSAPSR3 1,863.000 2.133 213 238,464
SAPSR3 SYS_LOB0000161775C00013$$ LOBSEGMENT PSAPSR3700 5,210.000 2.133 266 666,880
SAPSR3 BDCPS~0 INDEX PSAPSR3 2,496.000 2.133 222 319,488
SAPSR3 D010TAB TABLE PSAPSR3700 2,176.000 2.133 217 278,528
SAPSR3 COEP TABLE PSAPSR3 2,117.000 2.133 217 270,976
SAPSR3 FAGLFLEXA~0 INDEX PSAPSR3 808.000 2.133 172 103,424
SAPSR3 BSIS TABLE PSAPSR3 1,734.000 2.133 211 221,952
SAPSR3 BSAS TABLE PSAPSR3 1,650.000 2.133 210 211,200
SAPSR3 GLPCA~3 INDEX PSAPSR3 382.000 1.867 119 48,896
SAPSR3 BKPF TABLE PSAPSR3 1,012.000 1.867 198 129,536
SAPSR3 FAGLFLEXA~3 INDEX PSAPSR3 744.000 1.867 164 95,232
SAPSR3 FAGLFLEXA~2 INDEX PSAPSR3 661.000 1.867 154 84,608
SAPSR3 WRPL~001 INDEX PSAPSR3 112.000 1.867 85 14,336
SAPSR3 WRPL~0 INDEX PSAPSR3 112.000 1.667 85 14,336
SAPSR3 PCL2 TABLE PSAPSR3 1,000.000 1.600 196 128,000
SAPSR3 GLPCA~2 INDEX PSAPSR3 345.000 1.600 115 44,160
SAPSR3 FAGL_SPLINFO~3 INDEX PSAPSR3 136.000 1.600 88 17,408
SAPSR3 MARC~WRK INDEX PSAPSR3 160.000 1.600 91 20,480
SAPSR3 MSEG TABLE PSAPSR3 136.000 1.600 88 17,408
SAPSR3 ZBISALEDATA~0 INDEX PSAPSR3 208.000 1.600 97 26,624
SAPSR3 ZBISALEDATA3~0 INDEX PSAPSR3 195.000 1.500 96 24,960
SYS WRH$_ACTIVE_SESSION_HISTORY WRH$_ACTIVE_2349179954_18894 TABLE PARTITION
Kindly suggest -
Share fails when I try to share a very large project file (6.3g) to a very fast SD chip. Works fine sharing to a hard disk. Using iMovie 10.0.5. Any ideas?
If the volume's formatted as FAT32, it can't hold files which are 4GB or larger regardless of how much free space it has.
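That 4 GB ceiling comes from FAT32 storing each file's size in a 32-bit field, so the largest possible file is 2^32 - 1 bytes regardless of free space. A quick check using the 6.3 GB project from the question:

```python
# FAT32 stores file sizes in a 32-bit field, so a single file is
# limited to 2**32 - 1 bytes (just under 4 GiB), no matter how much
# free space the volume has.

FAT32_MAX_FILE_SIZE = 2**32 - 1  # 4,294,967,295 bytes

def fits_on_fat32(file_size_bytes: int) -> bool:
    return file_size_bytes <= FAT32_MAX_FILE_SIZE

project_size = int(6.3 * 1024**3)  # the 6.3 GB project from the post
print(fits_on_fat32(project_size))        # False: the share fails
print(fits_on_fat32(int(3.9 * 1024**3)))  # True: under the limit
```

Reformatting the SD card as exFAT (or HFS+) removes the per-file limit, at the cost of FAT32's broader device compatibility.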
-
Why is my Illustrator CC now very slow at opening a 2 MB AI file (about 5 minutes)? A month ago it was very fast (about 10 seconds).
Assuming that your system meets the minimum requirements for CC, make sure you have enough space on your hard drive. By "enough" I mean I try not to cross 3/4 of its capacity; put simply, the more free space the better, and the faster the drive the better (I've been considering an SSD drive but am still running a regular HDD).
If you are on Windows, clean up the %TEMP% folder regularly and keep the disk neat using the defragmentation tool.
Click Start > Run, enter %temp%, wait for the files to load, select all, and hit Shift+Del to remove them permanently.
For the defrag tool, you can either run defrag /? from the command line or right-click the drive and choose Properties > Tools > Defragmentation > Defragment Now.
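A more cautious variant of that %TEMP% cleanup can skip files that are still in use instead of force-deleting everything. A sketch (the seven-day age cutoff is my own addition, not from the post):

```python
import os
import tempfile
import time
from pathlib import Path

def clean_temp(temp_dir: Path, older_than_days: float = 7) -> int:
    """Delete files under temp_dir older than the cutoff; skip locked ones."""
    cutoff = time.time() - older_than_days * 86400
    removed = 0
    for p in temp_dir.rglob("*"):
        try:
            if p.is_file() and p.stat().st_mtime < cutoff:
                p.unlink()
                removed += 1
        except OSError:
            # File locked by a running program, or already gone: skip it.
            pass
    return removed

# Demo on a throwaway directory rather than the real %TEMP%.
demo = Path(tempfile.mkdtemp())
(demo / "stale.tmp").write_text("old")
os.utime(demo / "stale.tmp", (0, 0))    # pretend it is ancient
(demo / "fresh.tmp").write_text("new")  # recent, should survive
print(clean_temp(demo))  # prints 1: only the stale file is removed
```

On a real machine you would point it at `os.environ["TEMP"]`; the try/except matters there because Windows locks files that running programs still hold open.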
Other than that, finding the performance bottleneck requires deeper analysis. -
USB file copy very fast and not completed
Hello
when I try to copy big files (about 700 MB, for example) to a USB device, the copy speed is very fast (about 55 MB/s), but the copy does not actually complete. Can anyone help me solve this problem?
Linux usually doesn't copy the files all at once. You can force it to flush all
remaining data by unmounting the USB device, which will take quite a long time
if you do it immediately after issuing the copy command.
I think there is a mount option that forces all data transfers to be done
synchronously, in case you prefer that.
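The buffering behaviour described here is not specific to copying: the OS acknowledges writes once they reach the page cache, and unmounting forces the pending data out. A program can do the same for its own files with flush() plus fsync(). A minimal sketch:

```python
import os

# A "finished" write may still live only in the OS page cache.
# flush() + os.fsync() force the data to the device, which is what
# unmounting does for all pending writes at once.

def write_durably(path: str, data: bytes) -> None:
    with open(path, "wb") as f:
        f.write(data)         # lands in Python's buffer / OS page cache
        f.flush()             # push Python's userspace buffer to the OS
        os.fsync(f.fileno())  # ask the OS to write the cache to the device

write_durably("demo.bin", b"x" * 1024)
print(os.path.getsize("demo.bin"))  # prints 1024
```

This is why a safe removal (unmount/eject) matters on slow USB sticks: the copy command returns long before the device has the data.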
CAS Content lib growing very fast!! HELP.
Hello guys!!
The "SCCMContentLib" on the CAS in my SCCM 2012 R2 is growing very fast! In 15 minutes it increased by 3 GB!
Anyone help me?
Thanks!!
Regards, Julio Araujo
Is SP0 your CAS? It looks like the package is created there. You can read more about the Content Library here:
http://technet.microsoft.com/en-us/library/gg682083.aspx#BKMK_ContentLibrary and here
http://technet.microsoft.com/en-us/library/gg682083.aspx I would also like to suggest
https://social.technet.microsoft.com/Forums/en-US/de323e04-7bff-4d28-b76e-b4ab4c52cf4b/sccmcontentlib-on-cas?forum=configmanagerdeployment
Tim Nilimaa-Svärd | Blog: http://infoworks.tv | Twitter: @timnilimaa -
Very large Spotlight-V100 file
I just rebuilt my Spotlight indexes on numerous partitions.
Most are now 50-200 MB.
On my largest partition (470 GB with 335 GB used) the index is almost 1 GB.
This seems excessive.
Anyone else think this is huge (i.e. too big... bug)?
Ron
It was as easy as that. Thanks!
For those that might need this info later...
You will need to enable viewing of hidden files in Finder; a quick Google search will help with this. Then, from the GrandPerspective app, you can reveal the file in Finder and drag it to the Trash.
I will of course check out the log files again in a week or so to see if they are growing like crazy again.
Thanks again V.K. for your very fast help. -
Lion Preview and Textedit very slow when opening files from Leopard Server
My small office (about 10 users) recently upgraded our iMacs, Mac Minis, and MacBook Pros. We are experiencing a big problem: Preview opens files super slowly (1-2 minutes) when trying to open JPGs, PDFs, etc. from our 10.5.8 Leopard Server. We have deleted numerous plist files related to Preview, completely deleted Preview.app from one iMac and reinstalled it via Pacifist, and still see no change in performance. One of the new iMacs does open files much faster than the others, but we cannot figure out why this one machine works well and the others don't.
All machines are showing the below error (dozens of times) in Console when trying to open files directly from the server...
sandboxd: ([###]) Preview(###) deny system-fsctl
We have searched the internet for a solution but have not come up with anything that works. This problem exists with TextEdit as well. We tried opening a file from one Lion machine on another, and it opened very fast. We can also open files very quickly from older 10.4.11 Tiger machines directly from the server. So the best we can determine right now is that Lion and Leopard Server are not playing nicely together.
Any help would be greatly appreciated.
I have pasted (one of) the console log files below for reference.
Preview(3364) deny system-fsctl
Process: Preview [3364]
Path: /Applications/Preview.app/Contents/MacOS/Preview
Load Address: 0x102c9a000
Identifier: Preview
Version: ??? (???)
Code Type: X86-64 (Native)
Parent Process: launchd [114]
Date/Time: 2012-04-24 13:28:40.470 -0700
OS Version: Mac OS X 10.7.3 (11D50d)
Report Version: 7
Backtrace:
0 libsystem_kernel.dylib 0x00007fff922a0516 fsctl + 10
1 AppleShareClientCore 0x0000000105629f4a afp_getMountURL + 112
2 afp 0x0000000104e7fe03 AFP_GetMountInfo + 161
3 NetFS 0x00007fff8a17c295 NetFSGetMountInfo + 147
4 NetFS 0x00007fff8a17e08f GetCompleteMountURL + 68
5 CoreServicesInternal 0x00007fff86ef1e51 _ZL29addVolumeInfoForURLToBookmarkPK13__CFAllocatorR19BookmarkMutableDataPK7__CFURLmjPK9__CFArrayPP9__CFError + 1535
6 CoreServicesInternal 0x00007fff86ef12fb _ZL28createBookmarkWithURLAtDepthPK13__CFAllocatorPK7__CFURLmS4_PK9__CFArrayR19BookmarkMutableDatajbPP9__CFError + 3230
7 CoreServicesInternal 0x00007fff86ef2443 _CFURLCreateBookmarkData + 1309
8 CoreFoundation 0x00007fff8d1e3219 -[NSURL bookmarkDataWithOptions:includingResourceValuesForKeys:relativeToURL:error:] + 105
9 Foundation 0x00007fff91b72fbf -[NSURL(NSURL) encodeWithCoder:] + 239
10 Foundation 0x00007fff91b2e4ed _encodeObject + 1120
11 Preview 0x0000000102cdfb4d
12 AppKit 0x00007fff90975f0b -[NSWindow encodeRestorableStateWithCoder:] + 316
13 AppKit 0x00007fff90974a00 -[NSPersistentUIRecord generateArchive:] + 177
14 AppKit 0x00007fff90975937 recursivelyEncodeInvalidPersistentState + 525
15 AppKit 0x00007fff90973ccc -[NSPersistentUIManager flushAllChangesOptionallyWaitingUntilDone:updatingSnapshots:] + 1128
16 AppKit 0x00007fff90973836 -[NSPersistentUIManager flushPersistentStateAndClose:waitingUntilDone:] + 182
17 AppKit 0x00007fff90973714 __-[NSPersistentUIManager acquireDirtyState]_block_invoke_1 + 53
18 libdispatch.dylib 0x00007fff8a79b2b6 _dispatch_source_invoke + 635
19 libdispatch.dylib 0x00007fff8a797f77 _dispatch_queue_invoke + 71
20 libdispatch.dylib 0x00007fff8a7986f7 _dispatch_main_queue_callback_4CF + 257
21 CoreFoundation 0x00007fff8d0da06c __CFRunLoopRun + 1724
22 CoreFoundation 0x00007fff8d0d9676 CFRunLoopRunSpecific + 230
23 HIToolbox 0x00007fff86aa731f RunCurrentEventLoopInMode + 277
24 HIToolbox 0x00007fff86aae5c9 ReceiveNextEventCommon + 355
25 HIToolbox 0x00007fff86aae456 BlockUntilNextEventMatchingListInMode + 62
26 AppKit 0x00007fff90918f5d _DPSNextEvent + 659
27 AppKit 0x00007fff90918861 -[NSApplication nextEventMatchingMask:untilDate:inMode:dequeue:] + 135
28 AppKit 0x00007fff9091519d -[NSApplication run] + 470
29 AppKit 0x00007fff90b93b88 NSApplicationMain + 867
30 Preview 0x0000000102c9bdb4
Binary Images:
0x102c9a000 - 0x102e7efef com.apple.Preview (5.5.1 - 719.16) <EE12E506-F88C-319F-A2B4-5EF997884F0C> /Applications/Preview.app/Contents/MacOS/Preview
0x104e7f000 - 0x104e86fff com.apple.URLMount.AFPPlugin (4.0 - 4.0) <91C71C5D-562D-37C4-9131-6E6F086288DE> /System/Library/Filesystems/NetFSPlugins/afp.bundle/Contents/MacOS/afp
0x105618000 - 0x105664ff7 com.apple.AppleShareClientCore (2.5 - 2.5) <CC62F28C-398E-35E2-B2C0-B85A02E57247> /System/Library/Frameworks/AppleShareClientCore.framework/Versions/A/AppleShareClientCore
0x7fff86aa5000 - 0x7fff86dcfff7 com.apple.HIToolbox (1.8 - ???) <D6A0D513-4893-35B4-9FFE-865FF419F2C2> /System/Library/Frameworks/Carbon.framework/Versions/A/Frameworks/HIToolbox.framework/Versions/A/HIToolbox
0x7fff86eec000 - 0x7fff86f17ff7 com.apple.CoreServicesInternal (113.12 - 113.12) <C37DAC1A-35D2-30EC-9112-5EEECED5C461> /System/Library/PrivateFrameworks/CoreServicesInternal.framework/Versions/A/CoreServicesInternal
0x7fff8a17a000 - 0x7fff8a181fff com.apple.NetFS (4.0 - 4.0) <433EEE54-E383-3505-9154-45B909FD3AF0> /System/Library/Frameworks/NetFS.framework/Versions/A/NetFS
0x7fff8a795000 - 0x7fff8a7a3fff libdispatch.dylib (187.7.0 - compatibility 1.0.0) <712AAEAC-AD90-37F7-B71F-293FF8AE8723> /usr/lib/system/libdispatch.dylib
0x7fff8d0a1000 - 0x7fff8d275fff com.apple.CoreFoundation (6.7.1 - 635.19) <57B77925-9065-38C9-A05B-02F4F9ED007C> /System/Library/Frameworks/CoreFoundation.framework/Versions/A/CoreFoundation
0x7fff90910000 - 0x7fff91514fff com.apple.AppKit (6.7.3 - 1138.32) <A9EB81C6-C519-3F29-89F1-42C3E8930281> /System/Library/Frameworks/AppKit.framework/Versions/C/AppKit
0x7fff91ae2000 - 0x7fff91dfbff7 com.apple.Foundation (6.7.1 - 833.24) <6D4E6F93-64EF-3D41-AE80-2BB10E2E6323> /System/Library/Frameworks/Foundation.framework/Versions/C/Foundation
0x7fff92289000 - 0x7fff922a9fff libsystem_kernel.dylib (1699.24.8 - compatibility 1.0.0) <C56819BB-3779-3726-B610-4CF7B3ABB6F9> /usr/lib/system/libsystem_kernel.dylib
"Enable Related Files" isn't the fix for this issue. It's slightly different - I wanted to still load related files, but only the ones local to the file I was editing.
The fix was to use DW's Resolve To IP Address feature. It required adding a registry value - it fixed the issue straight away
This Adobe support doc helped: http://kb2.adobe.com/cps/887/cpsid_88742.html -
BPM data increase very fast and want to get suggestion about BPM capacity
Dear BPM Experts:
I have a problem with BPM capacity. My customer is using BPM 11g, and every day they
have 1,000 new processes; every process has 20-30 tasks. They find the data increases very fast, about 1 GB/day.
We ran a test of BPM capacity: I created a new simple process named simpleProcess,
which has only three input fields, and I used the API to initiate the task and submit it to the next
person.
We use the dev_soainfra tablespace with the default audit level. After inserting 5,000 tasks, we found dev_soainfra had grown by 362.375 MB,
so we assume 30,000 tasks will use 362 MB x 6 = roughly 2 GB of database space. In the next phase my customer wants
to push the BPM platform to more customers, which means more and more customers will use this platform, so:
Is this rate of data growth reasonable? Do you have a capacity planning guide for BPM 11g? And if I want to reduce
the data growth, what can we do?
We have tried turning the audit log off, but it seems not to help much; it only saved about 8% of the space.
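The extrapolation in the post can be written out explicitly: 5,000 tasks consumed 362.375 MB, so per-task cost and projected growth follow by simple proportion. The daily task volume (1,000 processes at roughly 25 tasks each) is taken from the question.

```python
# Linear extrapolation of the dev_soainfra growth measured in the test:
# 5,000 tasks -> 362.375 MB.

MB_PER_5000_TASKS = 362.375

def projected_mb(tasks: int) -> float:
    return tasks * MB_PER_5000_TASKS / 5000

print(f"per task: {projected_mb(1):.4f} MB")
daily_tasks = 1000 * 25  # 1,000 processes/day, 20-30 tasks each
print(f"per day (~{daily_tasks} tasks): {projected_mb(daily_tasks) / 1024:.2f} GB")
print(f"30,000 tasks: {projected_mb(30_000) / 1024:.2f} GB")
```

At ~0.07 MB per task, 25,000 tasks/day projects to roughly 1.8 GB/day at the measured rate, which is consistent in order of magnitude with the ~1 GB/day the customer observes, so the growth is mostly audit/instance data rather than an anomaly.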
Thanks for your help!
Eric
It looks like you are writing your data to disk every so often. For that reason, I recommend making the write depend on the number of samples you have instead of on time. With that you can preallocate your arrays with constants going into the shift registers. You then use Replace Array Subset to update your arrays. When you write to the file, make sure you go back to overwriting the beginning of your array. This will greatly reduce the amount of time you spend reallocating memory and will reduce your memory usage.
There are only two ways to tell somebody thanks: Kudos and Marked Solutions
Unofficial Forum Rules and Guidelines -
Hi guys,
I have a problem with my loaded movie clip: it's playing much faster than it's supposed to (using loadMovie). I've tried adjusting the frame rate of the main file and of the movie file, but the problem still occurs. Any suggestions for fixing this?
If both movies do not have the same frame rate, then they need to be made the same.
-
App to record name numbers and email very fast.
Hi everybody,
I will be recording a lot of names, numbers, and emails, and I want to know if there is an app that could help me do that very fast and export the data to an Excel, CSV, or VCF file. The Contacts app seems too slow for what I want to do. Thank you in advance.
Hi, thanks for replying. What I mean is an app that allows me to create simple forms with, say, fields 1, 2, 3...; then all you have to do is input data, and when you get to the end of a row, a new row starts automatically.
-
Mac faster handling large files than a PC?
Hei guys.
I don't want to start an OS-wars discussion here.
But I just wonder...
I have a pretty fast quad-core laptop with 8 GB RAM and Windows 7 (64-bit), and still, when it comes to bigger files, I don't think it handles them much faster than before. (Or am I spoilt already?)
Does anybody have experience with the latest MacBook Pro with 8 GB RAM, using Adobe CS and bigger files every day
(files about 500 MB to 1 GB in PS, e.g.), in comparison with a similar Windows machine?
Thanks for your replies
Björn
Macs have faster interfaces like FireWire 800 and ExpressCard slots that allow eSATA devices for very fast transfer speeds.
As far as internal storage goes, SSDs and RAID setups are just the same as on PCs.
What it most likely comes down to is that PCs are built a lot more cheaply on a widespread scale, while Macs are used a lot in the video and creative fields where high input/output is required.
So by default a lot of Macs come with fast interfaces, and by default a lot of PCs come with slow ones.
Also, Windows PCs require anti-malware software running, constant defragging, and other things to keep performance up, which isn't so much the case with Macs.
Someone who knows what they are doing can make either machine very fast.