Memory handling for big footage, wrong settings, bug?
Hi
I am getting huge differences in render times: a full-res render takes roughly 100 times longer than a half-res one, where I would normally expect maybe 4 times longer.
Here are the details:
I'm making a 30-second animation in NTSC. The material is high-resolution illustrations in .psd format with lots of layers. I parent and animate the layers in After Effects, i.e. no cel animation or sequences.
A couple of characters with loose limbs and some backgrounds, and the comp is probably a few hundred layers nested within comps etc.
All this is fine when previewing in half resolution: it takes a little while for the program to load all the layers for every new scene, and then it ticks along nicely, since it's static illustrations with position and rotation keys.
Previewing the 30 seconds in half res takes a couple of minutes tops. Totally fine.
But when switching to full res everything becomes unbelievably slow: it takes more than 1 hour 30 minutes for the exact same animation that previewed in 1-2 minutes at half res.
I am using
AE CS3, 8.0.2.27
8-core Mac Pro, Leopard 10.5.3
10 GB Ram
Multiprocessor rendering is switched on.
I see all the AE instances in Activity Monitor; most of the time they're running at less than 10% CPU.
So I'm guessing the cache is sufficient for the half-res stuff, and for full res it has to load everything for every single frame? Even so, it seems slower than that alone would explain. It feels like there's some sort of bottleneck.
Some good advice would be greatly appreciated.
Thanks, L
Turn off multiprocessing. Loading and unloading your hefty files may take way longer than the actual processing. The half-res previews would, in many situations, be served from the cache, thus not affecting I/O speed at all. Also, your math is not really right: depending on how an effect works, render times can go up almost indefinitely, e.g. if it works strictly pixel-by-pixel instead of in a coherent single one-pass buffer. In any case, full res has 2^2 = 4 times the pixels of half res, so a multiplier of roughly 4 is to be expected in memory consumption, and possibly processing as well... Not sure where your project goes wrong, though; that would require some more info about footage types, blend modes, effects and so on.
Mylenium
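For what it's worth, that resolution math can be sketched in a few lines (plain Python; a 720x480 NTSC DV frame is assumed here, your comp size may differ):

```python
# Half resolution in AE halves both dimensions, so a full-res frame
# carries 4x the pixels (and roughly 4x the uncompressed memory).
full_w, full_h = 720, 480                # assumed NTSC DV frame size
half_w, half_h = full_w // 2, full_h // 2

full_pixels = full_w * full_h            # 345600
half_pixels = half_w * half_h            # 86400
print(full_pixels / half_pixels)         # 4.0
```

A 100x slowdown is far beyond what pixel count alone explains, which points toward I/O and cache thrashing rather than raw processing.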
Similar Messages
-
Unaccounted-for memory is too big and leads to native memory issues.
On our server, after running for a month, unaccounted memory grows to 500 MB or higher, and native memory becomes very large, eventually leading to an OOM. Below is one sample:
j2eeapp:jhf1wl101:root > jrcmd 27398 print_memusage
27398:
[JRockit] memtrace is collecting data...
[JRockit] *** 19th memory utilization report
(all numbers are in kbytes)
Total mapped                        5100644
  Total in-use                      4038952
    executable                        75968
      java code                       23680  31.2%
        used                          21833  92.2%
    shared modules (exec+ro+rw)        4858
    guards                             5928
    readonly                              0
    rw-memory                       3986664
      Java-heap                     3145728  78.9%
      Stacks                         126050   3.2%
      Native-memory                  714885  17.9%
        java-heap-overhead            99596
        codegen memory                 1088
        classes                      166656  23.3%
          method bytecode             13743
          method structs              21987  (#281446)
          constantpool                72105
          classblock                   7711
          class                       11900  (#21166)
          other classdata             22950
          overhead                      114
        threads                         960   0.1%
        malloc:ed memory              81024  11.3%
          codeinfo                     4815
          codeinfotrees                2614
          exceptiontables              1790
          metainfo/livemaptable       24519
          codeblock structs              20
          constants                      33
          livemap global tables        8684
          callprof cache                  0
          paraminfo                     255  (#2929)
          strings                     24040  (#345745)
          strings(jstring)                0
          typegraph                   10132
          interface implementor list    260
          thread contexts               598
          jar/zip memory              12204
          native handle memory          486
        unaccounted for memory       366520  51.3%  4.52
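As a quick sanity check on a report like the one above, each percentage is simply a line's value over its parent category, e.g. the unaccounted-for share of native memory (figures copied from the report, in kbytes):

```python
# Figures taken from the print_memusage report above (kbytes).
native_memory = 714885   # Native-memory total
unaccounted   = 366520   # "unaccounted for memory" line

share = unaccounted / native_memory
print(f"{share:.1%}")    # 51.3% -- matches the report's last line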
"No one is perfect - not even Mac OS X. If a program manages to lock up central processes, a restart will be needed."
That first part of what you said there is indeed true.
But a modern OS is designed to keep processes separated. An application crash
SHOULD NOT
require a complete shut-down and reboot of your system. Yes, the log-out/log-back-in process might take a while if you have a particularly bad application crunch, because the OS has detected that something went screwy and is checking that the user account is healthy enough to run, and may be fixing some things in the process.
I've run all kinds of not-quite-polished software over the years since my adoption of OS X, and no matter how badly some of it performed, nothing ever required me to reboot my system to restore operating health. Now, that's not to say I don't run system maintenance utilities which, after performing their routines, suggest or require a shutdown or restart. I usually only do this if I've decided to delete the offending application from my system. (Sidebar: how diligent are you about maintaining the general health of your system through the regular practice of running preventative maintenance routines? Ramon may be along shortly to lay the boilerplate on you about this :))
Does Photoshop dig its hooks so deeply into the root level of the OS that it could cause the kind of problems you've had? I don't know for sure, but I'd guess that it's possible. And I'd suggest that, if wonky Photoshop behavior can be so bad that it
requires
the user to restart in order to regain operational health, then something is VERY wrong. And I'd go even further out on a limb to guess that this is a fault in Adobe's Photoshop coding, and not in Apple's OS coding. -
How to handle different languages in Illustrator for big clients?
Hi Guys,
I need a small suggestion about how to handle different languages in Illustrator for big clients. For instance Arabic, a language which needs to be read from right to left instead of left to right. There are other unfamiliar scripts as well (Cyrillic, Chinese, etc.). When you copy text in these languages from a Word file, it is not easy to paste it correctly into an .ai file. Besides that, it's also difficult to do a language check when we are not able to read it! So, to make a long story short, how can we deal with multiple-language workflows?
Can someone please give me a solution for this...
Thanks in advance...
HARI
I take it you might come from an Arabic background.
Here is how you can help yourself to some degree.
Google has a translation service which is free at the moment. It is excellent.
Secondly, if you are working with ME (Middle Eastern) languages, you really need the ME version of Illustrator, or of any other Creative Suite application, for it to work properly.
It is also best to enable the language and its input method for your system. On the Mac this is easy: go to Language & Text in System Preferences; once enabled, you can select the input from the menu bar (under the American flag if you are in the US). That menu appears once you have more than one language selected.
Select the language input you need, then a font for that language, then paste and edit.
You also need fonts for those languages on your computer.
How to manage this as a workflow, well, that is something you will have to work out yourself, or hire a consultant who specializes in this area.
We have had a few visit here after they came across a snafu, so they do exist. -
Revision: 4258
Author: [email protected]
Date: 2008-12-08 16:33:17 -0800 (Mon, 08 Dec 2008)
Log Message:
Bug: LCDS-522 - Add more configurable reconnect handling for connecting up again over the same channel when there is a connection failure/outage.
QA: Yes
Doc: No
Checkintests Passes: Yes
Details:
* Updates to configuration handling code and MXMLC code-gen to support new long-duration reliable reconnect setting.
Ticket Links:
http://bugs.adobe.com/jira/browse/LCDS-522
Modified Paths:
blazeds/trunk/modules/common/src/flex/messaging/config/ClientConfiguration.java
blazeds/trunk/modules/common/src/flex/messaging/config/ClientConfigurationParser.java
blazeds/trunk/modules/common/src/flex/messaging/config/ConfigurationConstants.java
blazeds/trunk/modules/common/src/flex/messaging/config/ServicesDependencies.java
blazeds/trunk/modules/common/src/flex/messaging/errors.properties
Added Paths:
blazeds/trunk/modules/common/src/flex/messaging/config/FlexClientSettings.java
Removed Paths:
blazeds/trunk/modules/core/src/flex/messaging/config/FlexClientSettings.java
Remember that Arch Arm is a different distribution, but we try to bend the rules and provide limited support for them. This may or may not be unique to Arch Arm, so you might try asking on their forums as well.
-
Select for update gives wrong results. Is it a bug?
Hi,
Select for update gives wrong results. Is it a bug?
CREATE TABLE TaxIds (
  TaxId      NUMBER(6) NOT NULL,
  LocationId NUMBER(3) NOT NULL,
  Status     NUMBER(1)
)
PARTITION BY LIST (LocationId) (
  PARTITION P111 VALUES (111),
  PARTITION P222 VALUES (222),
  PARTITION P333 VALUES (333)
);
ALTER TABLE TaxIds ADD ( CONSTRAINT PK_TaxIds PRIMARY KEY (TaxId));
CREATE INDEX NI_TaxIdsStatus ON TaxIds ( NVL(Status,0) ) LOCAL;
Insert into TAXIDS (TAXID, LOCATIONID, STATUS) Values (100101, 111, NULL);
Insert into TAXIDS (TAXID, LOCATIONID, STATUS) Values (100102, 111, NULL);
Insert into TAXIDS (TAXID, LOCATIONID, STATUS) Values (100103, 111, NULL);
Insert into TAXIDS (TAXID, LOCATIONID, STATUS) Values (100104, 111, NULL);
Insert into TAXIDS (TAXID, LOCATIONID, STATUS) Values (200101, 222, NULL);
Insert into TAXIDS (TAXID, LOCATIONID, STATUS) Values (200102, 222, NULL);
Insert into TAXIDS (TAXID, LOCATIONID, STATUS) Values (200103, 222, NULL);
--Session_1 return TAXID=100101
select TAXID from TAXIDS where LOCATIONID=111 and NVL(STATUS,0)=0 AND rownum=1 for update
--Session_2 waits commit
select TAXID from TAXIDS where LOCATIONID=111 and NVL(STATUS,0)=0 AND rownum=1 for update
--Session_1
update TAXIDS set STATUS=1 Where TaxId=100101;
commit;
--Session_2 returns 100101, oops!?
--Session_1 return TAXID=100102
select TAXID, STATUS from TAXIDS where LOCATIONID=111 and NVL(STATUS,0)=0 AND rownum=1 for update
--Session_2 waits commit
select TAXID, STATUS from TAXIDS where LOCATIONID=111 and NVL(STATUS,0)=0 AND rownum=1 for update
--Session_1
update TAXIDS set STATUS=1 Where TaxId=100102;
commit;
--Session_2 returns 100103

This is a bug. Got to be a bug.
Surely this has nothing to do with indeterminate results from ROWNUM, and nothing to do with read consistency as of statement start time in session 2.
Session 2 should never return 100101 once the lock from session 1 is released.
The SELECT FOR UPDATE should restart, and 100101 should not be selected, as it no longer meets the criteria of the select.
A statement restart should ensure this.
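A minimal, Oracle-free sketch of that restart behaviour (plain Python standing in for the predicate re-check; the data mirrors the test case above):

```python
# After session 1 commits status=1 on 100101 and the lock is released,
# a restarted SELECT ... FOR UPDATE must re-apply its predicate to the
# committed data, so 100101 can no longer qualify.
rows = {100101: {"status": 1},      # committed by session 1
        100102: {"status": None},
        100103: {"status": None}}

def first_matching(rows):
    """First TaxId where NVL(Status, 0) = 0, re-evaluated on restart."""
    for taxid in sorted(rows):
        if (rows[taxid]["status"] or 0) == 0:
            return taxid
    return None

print(first_matching(rows))         # 100102
```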
A number of demos highlight this.
Firstly, recall the original observation in the original test case.
Setup
SQL> DROP TABLE taxids;
Table dropped.
SQL>
SQL> CREATE TABLE TaxIds
2 (TaxId NUMBER(6) NOT NULL,
3 LocationId NUMBER(3) NOT NULL,
4 Status NUMBER(1))
5 PARTITION BY LIST (LocationId)
6 (PARTITION P111 VALUES (111),
7 PARTITION P222 VALUES (222),
8 PARTITION P333 VALUES (333));
Table created.
SQL>
SQL> ALTER TABLE TaxIds ADD ( CONSTRAINT PK_TaxIds PRIMARY KEY (TaxId));
Table altered.
SQL>
SQL> CREATE INDEX NI_TaxIdsStatus ON TaxIds ( NVL(Status,0) ) LOCAL;
Index created.
SQL>
SQL>
SQL> Insert into TAXIDS (TAXID, LOCATIONID, STATUS) Values (100101, 111, NULL);
1 row created.
SQL> Insert into TAXIDS (TAXID, LOCATIONID, STATUS) Values (100102, 111, NULL);
1 row created.
SQL> Insert into TAXIDS (TAXID, LOCATIONID, STATUS) Values (100103, 111, NULL);
1 row created.
SQL> Insert into TAXIDS (TAXID, LOCATIONID, STATUS) Values (100104, 111, NULL);
1 row created.
SQL> Insert into TAXIDS (TAXID, LOCATIONID, STATUS) Values (200101, 222, NULL);
1 row created.
SQL> Insert into TAXIDS (TAXID, LOCATIONID, STATUS) Values (200102, 222, NULL);
1 row created.
SQL> commit;
Commit complete.
SQL> Original observation:
Session1>SELECT taxid
2 FROM taxids
3 WHERE locationid = 111
4 AND NVL(STATUS,0) = 0
5 AND ROWNUM = 1
6 FOR UPDATE;
TAXID
100101
Session1>
--> Session 2 with same statement hangs until
Session1>BEGIN
2 UPDATE taxids SET status=1 WHERE taxid=100101;
3 COMMIT;
4 END;
5 /
PL/SQL procedure successfully completed.
Session1>
--> At which point, Session 2 returns
Session2>SELECT taxid
2 FROM taxids
3 WHERE locationid = 111
4 AND NVL(STATUS,0) = 0
5 AND ROWNUM = 1
6 FOR UPDATE;
TAXID
100101
Session2>

There's no way that session 2 should have returned 100101. That is the whole point of FOR UPDATE. It completely reintroduces the lost-update scenario.
Secondly, what happens if we drop the index?
Let's reset the data and drop the index:
Session1>UPDATE taxids SET status=0 where taxid=100101;
1 row updated.
Session1>commit;
Commit complete.
Session1>drop index NI_TaxIdsStatus;
Index dropped.
Session1>

Then try again:
Session1>SELECT taxid
2 FROM taxids
3 WHERE locationid = 111
4 AND NVL(STATUS,0) = 0
5 AND ROWNUM = 1
6 FOR UPDATE;
TAXID
100101
Session1>
--> Session 2 hangs again until
Session1>BEGIN
2 UPDATE taxids SET status=1 WHERE taxid=100101;
3 COMMIT;
4 END;
5 /
PL/SQL procedure successfully completed.
Session1>
--> At which point in session 2:
Session2>SELECT taxid
2 FROM taxids
3 WHERE locationid = 111
4 AND NVL(STATUS,0) = 0
5 AND ROWNUM = 1
6 FOR UPDATE;
TAXID
100102
Session2>

Proves nothing; non-deterministic ROWNUM, you say.
Then let's reset, recreate the index, and explicitly ask for row 100101.
It should give the same result as the ROWNUM query, without any doubts over ROWNUM etc.
If the original behaviour was correct, session 2 should also be able to get 100101:
Session1>SELECT taxid
2 FROM taxids
3 WHERE locationid = 111
4 AND NVL(STATUS,0) = 0
5 AND taxid = 100101
6 FOR UPDATE;
TAXID
100101
Session1>
--> same statement hangs in session 2 until
Session1>BEGIN
2 UPDATE taxids SET status=1 WHERE taxid=100101;
3 COMMIT;
4 END;
5 /
PL/SQL procedure successfully completed.
Session1>
--> so session 2 stops being blocked and:
Session2>SELECT taxid
2 FROM taxids
3 WHERE locationid = 111
4 AND NVL(STATUS,0) = 0
5 AND taxid = 100101
6 FOR UPDATE;
no rows selected
Session2>

Of course, this is how it should happen, surely?
Just to double-check, let's reintroduce ROWNUM but force the order, to show it's not about read consistency at the start of the statement; the restart should prevent it.
(reset, then)
Session1> select t.taxid
2 from
3 (select taxid, rowid rd
4 from taxids
5 where locationid = 111
6 and nvl(status,0) = 0
7 order by taxid) x
8 , taxids t
9 where t.rowid = x.rd
10 and rownum = 1
11 for update of t.status;
TAXID
100101
Session1>
--> Yes, session 2 hangs until...
Session1>BEGIN
2 UPDATE taxids SET status=1 WHERE taxid=100101;
3 COMMIT;
4 END;
5 /
PL/SQL procedure successfully completed.
Session1>
--> and then
Session2> select t.taxid
2 from
3 (select taxid, rowid rd
4 from taxids
5 where locationid = 111
6 and nvl(status,0) = 0
7 order by taxid) x
8 , taxids t
9 where t.rowid = x.rd
10 and rownum = 1
11 for update of t.status;
TAXID
100102
Session2>

Session 2 should never be allowed to get 100101 once the lock is released.
This is a bug.
The worrying thing is that I can reproduce it in both 9.2.0.8 and 11.2.0.2. -
Memory modules for Msi Big Bang trinergy
Hey forum,
I want to buy some memory modules for my Trinergy mobo because the ones I had earlier proved to be incompatible. I would like to ask whether there are any suitable memory modules besides the ones recommended on the MSI site for my mobo. I am looking for a 2x2 GB RAM kit with a heat spreader so it can run at around 1600 MHz. Thank you in advance.
P.S. Is the list on the site the only one? I mean, are there any updated versions of it with more high-end memory modules tested?

Ideally you would provide us with your full system specs first >>Posting Guide<<
From a memory perspective, the main reason these modules have heat spreaders is marketing, to impress potential buyers. The secondary reason is that most manufacturers sell you overvolted and overclocked 1066 or 1333 chips on a module marketed as 1600 that needs 1.65 V instead of the standard 1.5 V. Your memory controller is part of the CPU and natively supports only 1333 at 1.5 V.
If you really insist, then at least get yourself a 1600 kit that does that speed at 1.5 V. One kit that seems to work well is the CMZ4GX3M2A1600C9 from Corsair. From a user perspective, the Crucial modules you see in my signature come highly recommended, as they have proven to work on the P55 platform whenever used. -
Memory limitation for session object!
what is the memory limitation for using session objects?
venu

As already mentioned, there is no actual memory limitation in the specification; it only depends on the JVM's settings.
How different app servers handle memory management of session objects is another part of the puzzle, but in general you should not have problems writing any object to the session.
We once had the requirement to keep big objects in the session; we decided to build a ResourceFactory that returns the objects, and to store only unique IDs in the session.
We could later build on this and perform special serialization of the big objects in the distributed environment.
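A hypothetical sketch of that pattern (illustrative names only, not a real API): the session keeps a small key, and the heavy object lives in a factory-managed store.

```python
import uuid

class ResourceFactory:
    """Illustrative in-memory store; a real one might be a cache or database."""
    _store = {}

    @classmethod
    def put(cls, obj):
        key = str(uuid.uuid4())
        cls._store[key] = obj
        return key                        # only this small string goes in the session

    @classmethod
    def get(cls, key):
        return cls._store.get(key)

session = {}                              # stand-in for an HttpSession
session["report_id"] = ResourceFactory.put({"rows": list(range(100000))})
big = ResourceFactory.get(session["report_id"])
print(len(big["rows"]))                   # 100000
```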
Dietmar -
Suggestions for increased performance and better memory utilization for FTE
We all know that there is a pretty big downside to creating potentially thousands of DisplayObjectContainers (TextLines).
o - they are slow to create
o - they may be short lived
o - they occupy lots of memory
o - they may need to be generated frequently
Currently, there is only one way to find out all the information you need, and that is to create all the TextLines for a given data set.
This means that FTE does not scale well: it becomes very slow for large data sets that need to be regenerated.
I am proposing a possible solution and hope that Adobe will consider it.
If we had the right tools we could create a sliding window of display objects. With large data sets only a fraction of the content is actually visible. So only the objects that are actually visible need to be created. There is no way to do this efficiently with FTE at the present time.
So we need a few new methods and classes that parallel what you already have.
New Method 1)
TextBlock.getTextLineInfo (width:Number, lineOffset:Number, fitSomething:Boolean) : TextLineInfo
This method returns simple information about all the lines in a text block. No display objects are generated.
class TextLineInfo
public var maxWidth:Number; // maximum width of all the textlines in the textBlock
public var totalHeight:Number; // totalHeight of all the textlines in the textBlock
public var numLines:int; // number of lines in the lineInfo Array
public var lineInfo:Array; // array of LineInfo items for each textline
class LineInfo // sample - more or less may be needed
public var rawTextLength:int;
public var textWidth:Number;
public var textHeight:Number;
public var ascent;
public var descent;
public var textBlockBeginIndex:int;
Now getTextLineInfo needs to be as fast as possible. Find an advanced game programmer to optimize, write it in assembler, put grease on it.... do whatever it takes to make it fast.
New Method 2)
TextBlock.createTextLines (textLineInfo:TextLineInfo, begIdx:int, endIdx:int) : Array
Creates and returns an Array of TextLine objects based on the previously created TextLineInfo. A range can be specified.
It should be obvious that the above functions will improve the situation. Since this parallels what you already have it should not be earth shaking to implement.
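The sliding-window idea this enables can be sketched in a few lines (plain Python purely for illustration; the real thing would live in AS3 on top of the proposed TextLineInfo):

```python
# With per-line metrics known up front, only the visible slice of a
# 100,000-line document needs real display objects.
LINE_HEIGHT = 14.0
NUM_LINES = 100_000

def visible_range(scroll_y, viewport_h):
    first = int(scroll_y // LINE_HEIGHT)
    last = min(int((scroll_y + viewport_h) // LINE_HEIGHT) + 1, NUM_LINES)
    return first, last

first, last = visible_range(scroll_y=70_000.0, viewport_h=600.0)
print(last - first)   # 43 -- a tiny fraction of 100,000 lines
```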
New Display Object type
Much of the time you do not need a full-blown DisplayObjectContainer for a TextLine. I suggest an additional lightweight TextLine class; a good parallel would be the difference between Sprite and Shape.
Now, I have done some testing with this idea. Since you cannot implement this fully as it stands, I had to make some concessions. This sample contains 100,004 characters of data. You can resize it and it will always be fast. This sample only creates the visible portion of the display, but you may scroll the invisible portions into view. Each time the page is resized, it jumps back to the top, because of the current limitations of FTE.
The sample also contains a caret and allows selecting an area, but no editing, copy, paste etc. is available in this test.
If I did not do special handling for this, it would lock up for sure and be very user-unfriendly.
Now, it takes a moment to load 100k characters into the TextElement, so there may be a pause before you see the data. I may need to improve this. Once loaded it performs quite well.
Without the above or similar optimizations, FTE is just not going to scale up very well at all.
Don

Jeff, I don't see how a fix for that bug means waiting for a major release. It seems it just does not work as expected, and perhaps as documented. It should not break any code, should it? This seems a somewhat major improvement.
Using recreateTextLine in 10.1 I have these results so far:
My test case is 668 lines and using my slow test machine so the timing can be picked up.
When using just createTextLine and creating all text lines:
- using removeChildAt to first remove all the old textLines, then creating all textLines: ~670 ms
- removing all children at once by removing the container, then creating all textLines: ~570 ms
Using recreateTextLine with getChildAt, then creating all textLines: ~670 ms
So recreateTextLine does not seem to improve performance; just better memory use, I suppose.
Don -
Unable to get automatic event handling for OK button.
Hello,
I have created a form using CreateObject. The form contains an edit control plus Search and Cancel buttons. I have set the Search button's UID to "1" so it can handle the Enter key event. Instead, its caption changes to Update when I start typing in the edit control, and it does not respond to the Enter key. Cancel happens when Esc is hit.
My code looks like this -
Dim oCreationParams As SAPbouiCOM.FormCreationParams
oCreationParams = SBO_Application.CreateObject(SAPbouiCOM.BoCreatableObjectType.cot_FormCreationParams)
oCreationParams.UniqueID = "MySearchForm"
oCreationParams.BorderStyle = SAPbouiCOM.BoFormBorderStyle.fbs_Sizable
Dim oForm As SAPbouiCOM.Form = SBO_Application.Forms.AddEx(oCreationParams)
oForm.Visible = True
'// set the form properties
oForm.Title = "Search Form"
oForm.Left = 300
oForm.ClientWidth = 500
oForm.Top = 100
oForm.ClientHeight = 240
'// Adding Items to the form
'// and setting their properties
'// Adding an Ok button
'// We get automatic event handling for
'// the Ok and Cancel Buttons by setting
'// their UIDs to 1 and 2 respectively
oItem = oForm.Items.Add("1", SAPbouiCOM.BoFormItemTypes.it_BUTTON)
oItem.Left = 5
oItem.Width = 65
oItem.Top = oForm.ClientHeight - 30
oItem.Height = 19
oButton = oItem.Specific
oButton.Caption = "Search"
'// Adding a Cancel button
oItem = oForm.Items.Add("2", SAPbouiCOM.BoFormItemTypes.it_BUTTON)
oItem.Left = 75
oItem.Width = 65
oItem.Top = oForm.ClientHeight - 30
oItem.Height = 19
oButton = oItem.Specific
oButton.Caption = "Cancel"
oItem = oForm.Items.Add("NUM", SAPbouiCOM.BoFormItemTypes.it_EDIT)
oItem.Left = 105
oItem.Width = 140
oItem.Top = 20
oItem.Height = 16
Dim oEditText As SAPbouiCOM.EditText = oItem.Specific
What changes do I have to make to get the Enter key to work?
Thanks for your help.
Regards,
Sheetal

Hello Felipe,
Thanks for pointing me to the correct direction.
So, referring to the documentation, I tried out a few things, but I am still missing something here.
I made the following changes to my code:
oForm.AutoManaged = True
oForm.SupportedModes = 1 ' afm_Ok
oItem = oForm.Items.Add("1", SAPbouiCOM.BoFormItemTypes.it_BUTTON)
oItem.Left = 5
oItem.Width = 65
oItem.Top = oForm.ClientHeight - 30
oItem.Height = 19
oItem.SetAutoManagedAttribute(SAPbouiCOM.BoAutoManagedAttr.ama_Visible, 1, SAPbouiCOM.BoModeVisualBehavior.mvb_Default)
oButton = oItem.Specific
oButton.Caption = "OK"
AND
oForm.Mode = SAPbouiCOM.BoFormMode.fm_OK_MODE
oItem = oForm.Items.Add("1", SAPbouiCOM.BoFormItemTypes.it_BUTTON)
oItem.Left = 5
oItem.Width = 65
oItem.Top = oForm.ClientHeight - 30
oItem.Height = 19
oItem.AffectsFormMode = False
I get the same behaviour: the OK button changes to Update and the Enter key does not work.
Could you please help me find what I am doing wrong?
Regards,
Sheetal -
Memory upgrade for KM3M-V mobo (aka MS 7061-01S)
Hi, this is my first post. I've read the guidelines and searched for an existing topic but can't find one that quite matches what I want to ask, so I'm posting a new topic. Please forgive me if I have erred!
I have a TIME Computers desktop (please don't ask why), mobo being a MSI KM3M-V (also known as MS7061-01S), which is based on VIA KM266 chipset. CPU = Sempron 2400+.
I've just installed a new graphics card, an ATI All-in-Wonder 9200SE, with associated drivers and software including Catalyst (which meant I had to install the MS .NET Framework too). This seems to be working OK, but overall speed is not great. (Also, for some reason I don't seem to be getting any sound at all, but I expect I can fix that with a bit of trial and error.)
My next step is to increase the memory. The existing memory is one 256 MB UDIMM module, but I'm aiming to remove it and install 2 x 1 GB UDIMM modules, which I know the mobo can take. Naturally I'm looking for the maximum available performance (subject to stability considerations), so I want the highest bandwidth/speed the system can take. This is where it gets tricky. Having searched the Crucial, Kingston and other memory configurators, plus various other tech articles and indeed MSI's own website, I've reached the conclusion that I can select memory rated PC2100/2700/etc. up to PC4000 or even higher, provided that the maximum memory chip size (density) does not exceed 128 MB.
But I'm not sure about the speed limitation: is there any point in buying DIMMs capable of DDR333 or DDR400, or will my chipset limit the speed to 266 MHz anyway? I'll be grateful for any expert views.
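For reference, the PC ratings in play map to peak bandwidth like this: the PC number is the data rate (MT/s) times the 8-byte bus width, lightly rounded by marketing (e.g. 266 x 8 = 2128, sold as PC2100). A quick sketch:

```python
# DDR speed grade -> peak bandwidth (MB/s); the "PC" module name is
# this figure, rounded by marketing (266*8=2128 -> PC2100, 400*8 -> PC3200).
data_rates = {"DDR266": 266, "DDR333": 333, "DDR400": 400, "DDR500": 500}

for grade, mt_s in data_rates.items():
    print(f"{grade}: {mt_s * 8} MB/s peak")
```

Faster DIMMs will still work; they simply get clocked down to whatever the chipset actually supports.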
Operating System: Windows XP Home Edition (5.1, Build 2600) Service Pack 2 (2600.xpsp_sp2_rtm.040803-2158)
Language: English (Regional Setting: English)
System Manufacturer: Time Computers
System Model: KM266-8237
BIOS: Phoenix - AwardBIOS v6.00PG
Processor: AMD Sempron(tm) 2400+, MMX, 3DNow, ~1.7GHz
Memory: 192MB RAM
Page File: 135MB used, 331MB available
Windows Dir: C:\WINDOWS
DirectX Version: DirectX 9.0c (4.09.0000.0904)
DX Setup Parameters: None
DxDiag Version: 5.03.2600.2180 32bit Unicode

Thanks for the quick reply, Exo. I agree with you 100% that it's best to go for (at least) PC3200 with a view to a likely future upgrade. As I see it, it makes sense to go for the highest spec available at reasonable cost, so I would probably aim for PC4000 rather than PC3200. There doesn't seem to be any downside in doing so (provided one is happy to tweak the BIOS where necessary), and even the MSI mobo spec seems to endorse this view; see the bit picked out in red below:
http://www.msicomputer.com/product/p_spec.asp?model=KM3M-V&class=mb
Main Memory
• Supports four memory banks by using two 184-pin DDR DIMMs
• Supports a maximum memory size of 2GB.
• Supports 2.5v DDR SDRAM DIMM
Due to the High Performance Memory design, motherboards or system configurations may or may not operate smoothly at the JEDEC (Joint Electron Device Engineering Council) standard settings (BIOS Default on the motherboard) such as DDR voltage, memory speeds and memory timing. Please confirm and adjust your memory setting in the BIOS accordingly for better system stability.
Example: Kingston HyperX DDR500 PC4000 operates at 2.65V, 3-4-4-8, CL=3.
For more information about the specification of high-performance memory modules, please check with your memory manufacturer for more details. -
Error in creating IO file handles for job (number 3152513)
Hi All -
I am using Tidal 5.3.1.307. And the Windows agent that is running these jobs is at 3.0.2.05.
Basically, the error in the subject was received when starting a particular job again after it was cancelled, and on a couple of other, different jobs a few days before. These jobs have run successfully in the past.
This particular job had been running for 500+ minutes when it should take an estimated 40 minutes. At that point it would not allow a restart of the job; it just stayed in a launched status.
Trying to figure out what causes this error.
Error in creating IO file handles for job 3152513
Note: that being said, we were able to see 2 instances of this process running at the same time, and we noticed some blocking on the DB side of things.
Trying to figure out if this is a known tidal issue or a coding issue or both.
Another side note, after cancelling the 2nd rerun attempt the following error was encountered: Error activating job, Duplicate.
When we did receive the "Error creating IO file" message, the job did actually restart, but Tidal lost its hooks into it and the query was still running as an orphan on the DB server.
Thanks All!

The server to reboot is the agent server. You can try stopping the agent and then manually deleting the file; that may work. While the agent is running, the agent process may keep the file locked, so rebooting may not be sufficient.
The numerical folders are found as sub-directories off of the services directory I mentioned. I think the numbers correspond to the job type, so one number corresponds to standard jobs, another to FTP jobs. I'd just look in the numbered directories until you find a filename matching the job number.
The extensions don't really matter since you will want to delete all files that match your job number. There should only be one or two files that you need to delete and they should all be in the same numbered sub-directory.
As to the root cause of the problem, I can't really say since it doesn't happen very often. My recollection is that it is either caused by a job blowing up spectacularly (e.g. a memory leak in the program being launched by Tidal) or someone doing something atypical with the client. -
"Message Rejection Handler" for the file/ftp adapter using fault policy
Hi guys,
We are trying to implement a "Message Rejection Handler" for the file/FTP adapter using the following fault policy configuration.
Fault Policy:
`````````````
<?xml version='1.0' encoding='UTF-8'?>
<faultPolicies xmlns="http://schemas.oracle.com/bpel/faultpolicy">
  <faultPolicy version="2.0.1" id="ProcessNameGenericPolicy"
               xmlns:env="http://schemas.xmlsoap.org/soap/envelope/"
               xmlns:xs="http://www.w3.org/2001/XMLSchema"
               xmlns="http://schemas.oracle.com/bpel/faultpolicy"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    <Conditions>
      <faultName xmlns:bpelx="http://schemas.oracle.com/bpel/extension"
                 name="bpelx:remoteFault">
        <condition>
          <action ref="ora-retry"/>
        </condition>
      </faultName>
      <faultName xmlns:bpelx="http://schemas.oracle.com/bpel/extension"
                 name="bpelx:bindingFault">
        <condition>
          <action ref="ora-rethrow-fault"/>
        </condition>
      </faultName>
    </Conditions>
    <Actions>
      <Action id="ora-retry">
        <retry>
          <retryCount>3</retryCount>
          <retryInterval>1</retryInterval>
          <retryFailureAction ref="ora-rethrow-fault"/>
        </retry>
      </Action>
      <Action id="ora-rethrow-fault">
        <rethrowFault/>
      </Action>
      <Action id="ora-human-intervention">
        <humanIntervention/>
      </Action>
      <Action id="ora-terminate">
        <abort/>
      </Action>
    </Actions>
  </faultPolicy>
  <faultPolicy version="2.0.1" id="ProcessNameHumanInterventionPolicy"
               xmlns:env="http://schemas.xmlsoap.org/soap/envelope/"
               xmlns:xs="http://www.w3.org/2001/XMLSchema"
               xmlns="http://schemas.oracle.com/bpel/faultpolicy"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    <Conditions>
      <faultName xmlns:medns="http://schemas.oracle.com/mediator/faults"
                 name="medns:mediatorFault">
        <condition>
          <test>contains($fault.mediatorErrorCode, "TYPE_TRANSIENT")</test>
          <action ref="ora-retry-with-intervention"/>
        </condition>
      </faultName>
      <faultName xmlns:bpelx="http://schemas.oracle.com/bpel/extension"
                 name="bpelx:remoteFault">
        <condition>
          <action ref="ora-retry-with-intervention"/>
        </condition>
      </faultName>
      <faultName xmlns:bpelx="http://schemas.oracle.com/bpel/extension"
                 name="bpelx:bindingFault">
        <condition>
          <action ref="ora-rethrow-fault"/>
          <!--<action ref="ora-retry-with-intervention"/>-->
        </condition>
      </faultName>
    </Conditions>
    <Actions>
      <Action id="ora-retry-with-intervention">
        <retry>
          <retryCount>3</retryCount>
          <retryInterval>1</retryInterval>
          <retryFailureAction ref="ora-human-intervention"/>
        </retry>
      </Action>
      <Action id="ora-retry">
        <retry>
          <retryCount>3</retryCount>
          <retryInterval>1</retryInterval>
          <retryFailureAction ref="ora-rethrow-fault"/>
        </retry>
      </Action>
      <Action id="ora-rethrow-fault">
        <rethrowFault/>
      </Action>
      <Action id="ora-human-intervention">
        <humanIntervention/>
      </Action>
      <Action id="ora-terminate">
        <abort/>
      </Action>
    </Actions>
  </faultPolicy>
  <faultPolicy version="2.0.1" id="RejectedMessages">
    <Conditions> <!-- All the fault conditions are defined here -->
      <!-- The local part of the fault name should be the service name -->
      <faultName xmlns:rjm="http://schemas.oracle.com/sca/rejectedmessages"
                 name="rjm:PartnerLinkName">
        <condition>
          <!-- Action to take; see the Actions section for the details -->
          <action ref="writeToFile"/>
        </condition>
      </faultName>
    </Conditions>
    <Actions> <!-- All the actions are defined here -->
      <Action id="writeToFile">
        <fileAction>
          <location>Server/Loc/path</location>
          <fileName>Rejected_AJBFile_%ID%_%TIMESTAMP%.xml</fileName>
        </fileAction>
      </Action>
    </Actions>
  </faultPolicy>
</faultPolicies>
Fault Binding:
``````````````
<?xml version='1.0' encoding='UTF-8'?>
<faultPolicyBindings version="2.0.1"
                     xmlns="http://schemas.oracle.com/bpel/faultpolicy"
                     xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <composite faultPolicy="ProcessNameGenericPolicy"/>
  <service faultPolicy="RejectedMessages">
    <name>PartnerLinkName</name>
  </service>
  <reference faultPolicy="RejectedMessages">
    <name>PartnerLinkName</name>
  </reference>
</faultPolicyBindings>
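For completeness, the policy files are attached to the composite roughly like this (a sketch; the property names are the standard SOA Suite 11g ones, but the file names/locations here are placeholders for our project):

```xml
<!-- In composite.xml: point the runtime at the policy files.
     Plain file names resolve relative to the composite; an
     oramds: path can also be used. -->
<property name="oracle.composite.faultPolicyFile">fault-policies.xml</property>
<property name="oracle.composite.faultBindingFile">fault-bindings.xml</property>
```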
We have a SyncFileRead partner link.
The expectation is that when a message read by the SyncFileRead partner link is rejected,
the rejected message should land in a particular directory on the server.
Could you please help us fix this?
TIA.

Hi,
Have a look at this blog :
3) Error: HTTP_RESP_STATUS_CODE_NOT_OK 401 Unauthorized
Description: The request requires user authentication
Possible Tips:
• Check that XIAPPLUSER has the role SAP_XI_APPL_SERV_USER.
• If the error is in the XI Adapter, then your port entry should be the J2EE port 5<System no>00.
• If the error is in the Adapter Engine – have a look at SAP Note 821026; delete the Adapter Engine cache in transaction SXI_CACHE via Goto --> Cache.
• It may be a wrong password for user XIISUSER.
• It may be a wrong password for user XIAFUSER – for this, check the Exchange Profile and transaction SU01 and try to reset the password. Restart the J2EE Engine to activate changes in the Exchange Profile. After doing this, you can restart the message.
Http* Errors in XI
Thanks,
Pooja -
Oracle 11g - Memory used for sorting
Hi everyone,
I would like to know how I can analyze the memory used for sorting in Oracle 11g. When I run the query below, it returns 1531381.

select value from v$sysstat where name like 'sorts (memory)';

But when I check the sort_area_size parameter in v$parameter, it returns 65536. Does that mean my database is using more memory for sorting than sort_area_size, or am I interpreting the v$sysstat view and sort_area_size wrongly? What is the best way to monitor memory usage for sorting? Thanks in advance.
Regards,
K.H
Edited by: K Hein on Apr 5, 2012 8:16 PM

Check the value of pga_aggregate_target:
http://docs.oracle.com/cd/B19306_01/server.102/b14237/initparams157.htm
Note:
Oracle does not recommend using the SORT_AREA_SIZE parameter unless the instance is configured with the shared server option. Oracle recommends that you enable automatic sizing of SQL working areas by setting PGA_AGGREGATE_TARGET instead. SORT_AREA_SIZE is retained for backward compatibility.
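Note also that 'sorts (memory)' in v$sysstat is a cumulative count of sorts completed in memory, not an amount of memory, so comparing it to sort_area_size is comparing a counter to a byte value. With automatic PGA management, v$pgastat shows how much work-area memory the instance is actually using; a small sketch (these three statistics are reported in bytes):

```sql
-- Instance-wide PGA figures from v$pgastat
SELECT name, ROUND(value / 1024 / 1024, 2) AS mb
FROM   v$pgastat
WHERE  name IN ('aggregate PGA target parameter',
                'total PGA allocated',
                'total PGA inuse');
```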
What is the best way to monitor the memory usage for sorting? Try v$sort_usage or v$tempseg_usage:
col size for a22
col SID_SERIAL for a22

SELECT b.tablespace,
       ROUND(((b.blocks * p.value) / 1024 / 1024), 2) || ' MB' "SIZE",
       a.sid || ',' || a.serial# SID_SERIAL,
       a.username, a.osuser,
       a.program
FROM   sys.v_$session a,
       sys.v_$sort_usage b,
       sys.v_$parameter p
WHERE  p.name = 'db_block_size'
AND    a.saddr = b.session_addr
ORDER  BY b.blocks; -
HTTP handler for starting an external service cannot be read
Hi,
When I execute the work item for task TS21300098 of the HR Process Requisition workflow from the Business Workplace, and also from the UWL in the Enterprise Portal, the error says:
HTTP handler for starting an external service cannot be read
Message no. SWK045
Diagnosis
This work item is a link to a HTTP service. To start the service, a launch handler must be known to the SAP System.
However, the system could not find a launch handler.
System Response
The workflow system cannot start execution of the work item.
Procedure
Contact your workflow administrator:
In Customizing you must maintain a launch handler for HTTP-supported dialog services.
I have checked the configuration in transaction WF_HANDCUST and had the launch handler settings generated automatically,
but the problem still occurs.
Any suggestions?
Thanks & Regards
Sumanth

Hi guys,
I got the same issue. This blog helped me with tcode WF_HANDCUST. I generated the URL
http://waspgh.kcc.com:8083/sap/bc/webflow/wshandler
(click on Generate URL, then click on Distribution). Immediately afterwards, click Test URL. The test was not successful; it stopped at
http://waspgh.kcc.com:8083/sap/bc/webflow
with an HTTP page error. So use tcode SICF, go to sap --> bc --> webflow service, and activate it.
Then test the connection again. It will be successful:
http://waspgh.kcc.com:8083/sap/bc/webflow/wshandler?ping=true&sap-client=400
Handler test
Test successful
Thanks,
Shankar -
EmailReceived event handler for incoming email in SharePoint 2013
Hi,
I am developing a custom event handler to enable incoming email for a custom document library. I have a couple of questions.
1. Once I attach the event handler, I can see the incoming email settings for the custom document library, but only two options are displayed:
1. Allow this document library to receive email (Yes/No)
2. E-mail address
All other properties are not displayed. Is this normal behaviour for a custom event handler on a custom document library, or is there an issue somewhere?
2. So I have set the other properties using PowerShell as below:
$list = $web.Lists["invoice Documents"]
$list.EmailAlias = "TestDocument"
$list.EnableAssignToEmail = $true
$list.RootFolder.Properties["vti_emailusesecurity"] = 1
$list.RootFolder.Properties["vti_emailsaveattachments"] = 1
$list.RootFolder.Properties["vti_emailattachmentfolders"] = "root"
$list.RootFolder.Properties["vti_emailoverwrite"] = 0
$list.RootFolder.Properties["vti_emailsavemeetings"] = 0
$list.RootFolder.Properties["vti_emailsaveoriginal"] = 0
$list.RootFolder.Update()
$list.Update()
Here I set vti_emailusesecurity to 1, so incoming email should be accepted only from users who have permission on the list.
But when any user from the domain sends mail to this list, the EmailReceived event handler is triggered. I expected the handler not to be triggered, and an access-denied error in the ULS log; instead EmailReceived fires and the email is delivered to this address successfully.
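For reference, the flag can be read back afterwards to confirm it actually stuck (a sketch using server-side SharePoint PowerShell; the site URL is a placeholder):

```powershell
# Confirm the incoming-email security flag on the list
# (1 = accept mail only from users who can write to the list).
$web  = Get-SPWeb "http://server/sites/site"   # placeholder URL
$list = $web.Lists["invoice Documents"]
$list.RootFolder.Properties["vti_emailusesecurity"]
$web.Dispose()
```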
Can anyone with experience of this please help?
Thanks
Sathya

http://www.coretekservices.com/2012/01/26/sharepoint-content-organizer-%25e2%2580%2593-emailing-your-drop-off-library-and-getting-it-to-work
Central Administration > Monitoring > Review Job Definitions (under Timer Jobs) > Content Organizer Processing
Also check below:
http://tutorial.programming4.us/windows_server/SharePoint-2010---Content-Organizer-as-a-Document-Routing-Tool.aspx
If this helped you resolve your issue, please mark it Answered