Optimizing AE performance/settings for my build
Hi everyone,
I'd really appreciate recommended application/project settings for my PC build:
i7-4790, 3.6 to 4.0 GHz
32 GB RAM, 1866 MHz
GTX 970, 4 GB VRAM
128 GB SSD for OS and apps
256 GB SSD for cache and previews
512 GB SSD for media footage, projects, auto-save
1 TB WD 7,200 rpm for backups and exports
Windows 7 Prof.
latest Nvidia drivers
latest AE CC 2014 version
Thank you in advance for help.
Best,
Alex
See this page for resources about making After Effects work faster: http://adobe.ly/eV2zE7
See this page for information about memory and improving performance in After Effects: http://adobe.ly/dC0MdR
See this page for information about hardware for After Effects: http://adobe.ly/pRYOuk
Similar Messages
-
Optimal Scratch Disk Setting for Notebook/Macbook Pro Users
After doing a ton of research, I came across THIS guideline for different scratch disk setups, but I am still not sure what would be the best setup for me.
Here is what I am working with for my Current Projects Work Station
2011 i7 Macbook Pro with SSD as my main drive and 750GB 5400rpm HD in the optical bay.
OWC Elite Mini and Dual Mini in Raid0 for Media Storage. Both 7200rpm Drives
Currently I have my scratch disk set to the following:
SSD: Startup drive, OS, Programs, Project Files and Exports
750GB HD: Partitioned - 250GB to Backup SSD and 500GB for Photo storage and PP Scratch Disk Video, Audio and Previews
OWC Minis: Media Storage for Current Projects
My question is, what can I do to optimize my premiere pro settings?
Should I move my scratch disk to my external HD since it's a faster drive?
I have a very similar setup but am using a couple of pieces of new gear. Wondering if Harm or any of the rest of you have any advice.
2011 i7 Macbook Pro with 500GB SSD as my main drive and 750GB 5400rpm HD in the optical bay. The SSD is a new
Samsung 840 Pro Series 2.5-inch 512 GB SATA 6.0 Gb/s Solid State Drive (MZ-7PD512BW)
4th-generation 3-Core Samsung MDX Controller ensures sustained performance under the most punishing conditions
Backed by an industry-leading five-year limited warranty
I have various LaCie drives that contain projects, media, exports connected via LaCie Tbolt esata adapter. So these are drives I've had for years that are now running at 3G esata speeds.
I'm not using the 2nd internal drive for anything since it is 5400rpm. I just use it for backups, photo, music storage.
So I usually work on small projects with the projects, media, exports, and renders all pointing back to the eSATA-connected external LaCies.
Should I point the media cache to the 500GB SSD (my internal C: drive)? The answer to this is still confusing, since I've read many conflicting posts in various places.
Any other suggestions for now with this setup? (External Tbolt connected raid not yet in my budget.) -
Optimal Scratch Disk Setting for Notebook/Macbook Pro Users 2013??
Hey Folks,
I have read a few previous posts from other users, but they were on the older model MBPs. Now that the USB 3 option is there, and there are other options for Thunderbolt storage, I was wondering what the best solution for disk setups would be. I have no problem removing the optical drive and installing another HD in its place, so please feel free to use that option in any configuration solutions you may offer.
Thanks in advance for your input to this discussion.
Here's what I recommend as a starting point for a good editing experience:
C: OS/Programs
D: Projects
E: Cache/Scratch
F: Media
G: Exports
Here's what I recommend as a bare minimum:
C: OS/Programs
D: Projects/Cache/Scratch
E: Media/Exports
In all cases, implement some sort of backup. -
How to build dynamic databases (record sets) for mobile?
Hi All ,
I have an application that needs to personalize the data for users.
I have the data available, but don't know how to personalize it for users to download a special version of my application.
How to build dynamic databases (record sets) for mobile?
In the load rule, in the dimension build settings, go to the Dimension Definition tab, choose the time dimension, and right-click on it. Select Edit Properties. If you have not done so, set the dimension to be the time dimension. Then go to the Generations/Levels tab and set the generation names you need. For example, if you want YTD, you would set the generation name to Year; if you want QTD, set it to Quarter. You would set the number to the generation number that corresponds to the generation. The DBAG has the list of names for all of the DTS members.
-
Optimal setting for radio mode
I have an Airport Extreme Base Station and an Airport Express; the latter extending the former. Apple's PDF on the subject made this relatively easy to do. However, my family has two Macs, a Macbook 802.11g and an iMac 802.11n.
Is there an optimal setting of the Airport Utility's Wireless "Radio Mode" for both of these machines that doesn't involve buying more Airports?
I would recommend using the "802.11n only (5 GHz) - 802.11b/g/n" setting for Radio Mode to support your two Macs. Note: You may need to hold down the Option key BEFORE making a selection in order to get the additional mode choices.
-
Best setting for Canon VIXIA HF R21 for YouTube
I am looking for the best or most optimal setting for shooting video with my Canon VIXIA HF R21 camera.
I am using a MacBook Pro laptop and Adobe Premiere Elements 10.
My goal is to produce the best quality videos for a "YouTube" channel. The videos will be for a teenage car show shot outdoors and narrated.
The current DEFAULT settings are:
Recording Mode = SP, 1440x1080
Bitrate = 7Mbps
Codec = MPEG4-AVC/H.264
Frame Rate = 60i
Digital Image stabilization = standard
Here are possible "alternative selections" - please let me know if any of these settings are better than the Default settings for YouTube.
Change Recording Mode to
1. MXP = 1920x1080 @ 24Mbps
2. FXP = 1920x1080 @ 17Mbps
3. XP+ = 1440x1080 @ 12 Mbps
Change Frame Rate to
1. PF30
Digital image stabilization
1. Dynamic
2. Standard
3. Off
I will use Adobe Premiere Elements 10 to upload the final edited video. What are the best upload compression settings if I want the best quality?
BillFlorida, I'm not sure what you're not getting here. Have you tried any of the things we've suggested?
There are two settings you need to worry about. One is your PROJECT setting. That must match your original footage, as I've indicated. If it doesn't, you're going to get poor performance and video quality.
The second setting is your OUTPUT setting. That's the only setting YouTube is interested in. If you use the output settings I linked you to above -- working from a project that's been set up properly -- you will get excellent YouTube results.
Try it. You'll see.
You'll also see, if you experiment a bit, that there's virtually no difference in the quality of FXP and MXP -- especially if you're going to display your video online. The important thing is that your PROJECT settings match that 1920x1080 AVCHD footage and that you use the YouTube output settings recommended in the FAQs.
Try it. You will see. -
How to know the optimal Degree of Parallelism for my database?
I have an important application on my database (Oracle 10.2.0) and the box has 4 CPUs. None of the tables are partitioned. Should I set the parallel degree myself?
How to know the optimal degree of parallelism for my database?
As far as I am concerned, there is no optimal degree of parallelism at the database level. The optimal value varies by query based on the plan in use, and it may change over time.
It is not that difficult to overuse PQO and end up harming overall database performance. PQO is a brute-force methodology and should be applied carefully. Otherwise you end up with inconsistent results.
You can let Oracle manage it, or you can manage it on the statement level via hints. I do not like specifying degrees of parallelism at the object level. As I said, no two queries are exactly alike and what is right for one query against a table may not be right for another query against the table.
If in doubt, set up the system to let Oracle manage it. If what you are really asking is how many PQO sessions to allocate, then look at your Statspack or AWR reports and judge your system load. Monitor v$px_session and v$pq_slave to see how much activity these views show.
IMHO -- Mark D Powell -- -
Issues with "Higher performance" setting on Macbook Pro with external monitor
Hi all,
I rarely used the "Higher performance" setting in the Energy Saver pane of System Preferences.
I use my MB connected with an external monitor, with its own screen closed. I tried to switch that setting on and the external monitor seems to repeatedly turn off and on. This strange behaviour vanishes if I open the MB's screen, using it in a double monitor configuration.
Did anyone hear about some similar problem?
p.s.: I don't know if this is the exact location for this thread; any suggestions are welcome.
It was set to 1080p; 1920x1080 did not show up as an option (even when holding the Option key). 1080p should be equivalent. As an experiment I grabbed another monitor that was not being used. It is a 22-inch LG with a maximum display of 1920x1080, and it's currently set to 1920x1080. The issue is a little different: it was not detected as a TV this time, but the screen still looks blurry. There may be some improvement, but not much.
-
Best setting for gif animations
What is the optimal setting for animated GIFs?
Depends on your style and your training as an animator, and what your project goals are: straight animation? Interactivity? E-cards? Traditional animators used to drawing frame-by-frame with exposure sheets will feel more comfortable with Toon Boom Studio (www.toonboom.com). It has far superior drawing tools, color palette management, and timeline management compared to Flash. But if you're used to Illustrator, that may make no difference to you. It appears that the upcoming Flash CS4 will narrow the gap, as the interface appears to include enhancements that will be more familiar to traditional animators, not to mention IK, which Toon Boom Studio doesn't have (the highly expensive Toon Boom Solo does have it).
That said, I think Flash is a great program for creating cartoon animation. You may not think you're interested in ActionScript programming at this point. I use Flash mainly for cartoon animation, and I resisted ActionScript for a few years. But once you get used to Flash as an animation tool, and then decide to dip your little toe into ActionScript, you'll find out what an astounding tool it is, even for traditional animation, enabling you to do things impossible in more traditionally oriented programs. Plus, Flash and Illustrator are now fairly well integrated, so you'll be able to import and export between the two.
On balance, I'd go for Flash. -
Oracle 9.2.0.3.0 patch set for windows - questions
Hi!
I have some questions about that. I have Oracle 9.2.0.1.0 on a Windows 2003 server running with one local database; no Real Application Clusters.
I don't understand which instructions I have to follow for the Oracle upgrade.
Must I also perform instruction 3 (connect to the database first and drop)?
Must I also perform the post-install actions for my database?
(startup migrate ...)
What does the following mean in the post-install actions:
'Execute these post-install actions only if you have one or more databases associated to the upgraded $ORACLE_HOME.'
My database path is e:\oracle\oradata\database and my Oracle path is e:\oracle\ora92. Do I, in my case, have to perform the post-install actions?
Thanks for your help!
1. When applying this patchset on Windows NT, XP or Windows 2000, you must log onto the system as a user with Administrative privileges
(e.g. as a user which is a member of the local Administrators group).
2. Unzip the downloaded patch set file into a new directory.
3. Drop the xmlconcat function by running the following commands:
drop public synonym xmlconcat;
drop function xmlconcat;
4. Shut down any existing Oracle Server instances in the ORACLE_HOME to be patched with normal or immediate priority. i.e.: Shutdown all instances (cleanly). Stop the database services, and stop all listener, agent and other Oracle processes running in or against the ORACLE_HOME to be installed into.
5. Perform the following pre-install steps for Real Application Clusters (RAC) environments only:
IN ADDITION TO STOPPING ALL ORACLE SERVICES IN THE ORACLE_HOME TO BE UPGRADED:
If you are running in a RAC environment, stop the following Windows service(s)/device(s) using the Net Stop command:
Stop the Windows service named "OracleGSDService"
Stop the Windows service named "OracleCMService9i".
Stop the Windows device named "OraFenceService" (This Device does NOT show up as a Service in the Control Panel).
For Example:
C:\>net stop OracleGSDService
C:\>net stop OracleCMService9i
C:\>net stop OraFenceService
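The three stops can also be scripted. Below is a minimal sh-style sketch (my own, not part of the original patch notes) that keeps the documented order in one place; with DRY_RUN=1 it only prints each command so the sequence can be checked before it is run for real on each node:

```shell
# Hypothetical helper, not from the patch set notes: stop the RAC-related
# Windows services in the documented order. With DRY_RUN=1 the loop only
# prints the commands; unset DRY_RUN to actually invoke `net stop`.
DRY_RUN=1
for svc in OracleGSDService OracleCMService9i OraFenceService; do
  if [ "${DRY_RUN:-0}" = 1 ]; then
    echo "net stop $svc"
  else
    net stop "$svc"
  fi
done
```

After the OSD components have been upgraded, the same loop with `net start` and the service names reversed (OraFenceService, OracleCMService9i, OracleGSDService) would restart them, matching the order shown further below.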
Once the above steps have been performed on all the nodes of the cluster, you should then be able to upgrade the OSD components to the 9.2.0.3 versions. This can be accomplished by...
xcopy all the files in the \Disk1\preinstall_rac\osd\*.* directory to the directory in which the OSD components were originally installed on all nodes(typically %SystemRoot%\system32\osd9i).
Note: You may also be able to look in HKEY_LOCAL_MACHINE\Software\Oracle\OSD9i registry key to determine the directory in which the original OSD components were installed. Now the OSD components can be restarted on all the nodes in the cluster.
C:\>net start OraFenceService
C:\>net start OracleCMService9i
C:\>net start OracleGSDService
After all of the above steps have been completed on all the nodes of the cluster, you are now ready to proceed with the Oracle 9.2.0.3.0 patchset installation.
6. Start the installer:
If the current installer version is less than 2.2.0.18.0, download Installer 2.2.0.18.0 from Oracle Metalink, where it can be accessed under bug number 2878462 (choosing the MS Windows NT/2000 Server platform), then run the 2.2.0.18.0 setup.exe from the C:\Program Files\Oracle\oui\install directory.
7. You may install the Patch Set through either an interactive or a silent installation.
To perform an interactive installation using the Oracle Universal Installer graphical interface:
1. Start the installer from the newly installed OUI 2.2.0.18.0 by running the version of setup.exe located at C:\Program Files\Oracle\oui\install and verify
that the version of the OUI GUI is 2.2.0.18.0 before proceeding.
2. Follow the steps given below within the installer:
1. On the Welcome screen, click the Next button. This will display the File Locations screen.
2. Click the Browse button for the Source... entry field and navigate to the stage directory where you unpacked
the Patch Set tar file.
3. Select the products.jar file. Click the Next button
The products file will be read and the installer will load the product definitions.
The products to be loaded will be displayed (verify ORACLE_HOME setting for this first).
4. Verify the products listed and then click on the Install button.
5. Once the installation has completed successfully, it will display End of Installation. Click on Exit and confirm
to exit the installer.
To perform a silent installation requiring no user intervention:
1. Copy the response file template provided in the response directory where you unpacked the Patch
Set tar file.
2. Edit the values for all fields labeled as "<Value Required>" according to the comments and examples
in the template.
3. From the unzipped patchset installation area, start the installer by running the setup executable passing
as the last argument the full path of the response file template you have edited locally with your own
value of ORACLE_HOME and FROM_LOCATION:
setup.exe -silent -responseFile <full_path_to_your_response_file>
Post Install Actions
Execute these "Post Install Actions" only if you have one or more databases associated to the upgraded $ORACLE_HOME.
Important Notes
1: Java VM and XML DB Requirements
Users who have JVM (Java enabled) or JVM and XDB installed on their 9.2.0.1 databases should make sure that the init.ora parameters shared_pool_size and java_pool_size are each 150 MB or more before running the catpatch.sql upgrade script. Failure to do so could result in an unrecoverable memory failure during running of the script. Please note that JVM and XML DB was shipped as part of the default 9.2.0.1 seed database and will be present unless the user explicitly installed a 9.2.0.1 instance without them.
2: SYSTEM table space
If you have JServer installed in the database, you should check to be sure there is at least 10M of free space in the SYSTEM table space before running these actions.
3: Installing on Cluster Databases
If you are applying this patch set to a cluster database, then set the CLUSTER_DATABASE initialization parameter to false. After the post-install actions are completed, you must set this initialization parameter back to true.
To complete the installation of this patch set, you need to start up each database associated with the upgraded $ORACLE_HOME, start the database listener (e.g., lsnrctl start), login using SQL*Plus (e.g., sqlplus "/ as sysdba"), and run the following commands/scripts in order from $ORACLE_HOME within a MIGRATE session. If you are using the OLAP option, make sure that the database listener is up.
startup migrate
spool patch.log
@rdbms/admin/catpatch.sql
spool off
Review the patch.log file for errors and re-run the catpatch script after correcting any problems
shutdown
startup
This step is optional - it will recompile all invalid PL/SQL packages now rather than when accessed for the first time - you can also use utlrcmp.sql to parallelize this in multiprocessor machines:
@rdbms/admin/utlrp.sql
Execute the following if you use Oracle OLAP option:
alter user olapsys identified by <password> account unlock
connect olapsys/<password>
@cwmlite/admin/cwm2awmd.sql
exit
Execute the following script only if you have version 9.2.0.1.0 of Oracle Internet Directory installed in your ORACLE_HOME. Make sure that the database and database listener are running and that all parameters are specified prior to running the script:
$ORACLE_HOME/bin/oidpatchca.bat
-connect <Connect String>
-lsnrport <Listener Port>
-systempwd <SYSTEM Password>
-odspwd <ODS Password>
-sudn <Super-User DN>
-supwd <Super-User Password>
-dippwd <Password to register DIP Server>
Where:
connect - Database connect string
lsnrport - Database Listener port
systempwd - Password of the database 'SYSTEM' user
odspwd - Password of the database 'ODS' user
sudn - Super-user DN
supwd - Super-user Password
dippwd - New password to register Directory Integration Server. This password must conform to the password policy in the OID server
Execute the following steps only if you are using the RMAN catalog:
rman catalog <user/passwd@alias>
upgrade catalog;
upgrade catalog;
I don't understand which instructions I have to follow for the Oracle upgrade?
-- You have to follow all the instructions mentioned
(Just check for few if's for Cluster enabled, OID and OLAP databases. Perform these steps only if they apply)
Must I also perform instruction 3 (connect to the database first and drop)?
-- Yes, you should.
Must I also make the post install action for my database?
(startup migrate ...)
-- Yes. Do run catpatch.sql .
What does the following mean in the post-install actions:
'Execute these post-install actions only if you have one or more databases associated to the upgraded $ORACLE_HOME.'
-- In this step make sure you do step 1
1: Java VM and XML DB Requirements
Otherwise catpatch.sql will fail.
Do 2 and 3 only if applicable
Chandar
My database path is e:\oracle\oradata\database and my Oracle path is e:\oracle\ora92. Do I, in my case, have to perform the post-install actions? -
What is the best Compressor setting for best quality video playback on an iBook g4?
I know the iBook and G4s in general are very outdated today, but I need to ask anyway. I have some video projects in 720p and 1080p which I have down-converted to 480p and also exported to MPEG-2 for DVD (personal wedding videos and videos made for my clients using Final Cut Studio). Anything encoded at most resolutions using H.264 won't play on my iBook. Even 480p.
I have about 20 hours of mixed video content that I need it in a format that is suitable for an iPhone 4 and an iBook 12" with a 1.2GHz G4, 1.25GB RAM and I added a 250 WD 5400 IDE hard disk (running 10.5.8 and 10.4.11 for Classic Mode). I know the iBook doesn't seem like the best tool for modern video playback, but I need to figure out which setting will play best with iPhone 4 and iBook so I don't need to make 2 local copies of each video for each device.
The iBook plays best with the original DVD output MPEG-2 file and playing back in QT Pro or VLC... but I already have 180 GB's of MPEG-2 files now and my little HD is almost full. I don't have enough room to convert all the iPhone 4 counterparts. If I use Compressor 3.5.3, what is the optimal setting for iBook and iPhone .m4v or .mp4 files that can play on both devices? So far 720x400-480 widescreen videos @ 29fps works great on my iPhone, Apple TV 2, and other computers but seems to murder my poor iBook if encoded with high profile (and still choppy on simple profiles). 640x480 (adding black matte bars to my videos) plays fine in MPEG-2 but drops frames or goes to black screen if I convert it to mp4 (and looks bad on the iPhone 4 because of the matte). But if I convert on any of the simple profiles, it looks terrible on my iPhone 4 and a blocky on the iBook.
This is the problem leading me to having 2 copies of each video and eating my hard disk space. What is the best video setting for playback on both the iBook and iPhone 4? Can the iBook play back H.264 at decent resolutions at all? I don't really want to have a 480p .m4v collection for the iPhone 4 and a raw MPEG-2 collection just to play the same videos on the iBook.
Any suggestions are greatly appreciated! Thanks!
Update: The iBook can play any 480p video and higher if I encode them with DivX in AVI format. But of course this is not compatible with my iPhone 4. At least I can shrink my library now and get away from the full MPEG-2s. I don't get why I can't use Apple's H.264, though. There has to be a setting I am missing. The sample Apple H.264 videos from the days of Tiger worked flawlessly on my iBook when it was new, so the CPU must be capable of decoding it. I really can't understand this.
Also, since I made my videos in English for my family, I had to create soft subtitles for my wife's Chinese family, and I can't get players like QT with perian or MPlayerX to sync them properly to an AVI encoded with DivX, they only sync well with the iPhone 4 m4v/mp4 formatted files I made. This is a real pickle.
So now I may need three or four copies of each video, LOL. I need to hardcode the subtitles if I want to use AVI to playback on older machines, and keep the mp4 file for the iDevices too, while keeping higher quality h264 videos for my American relatives...
If h264 is compatible with my iBook, what is the proper encoding settings? Must I dramatically lower the settings or frame rate? I can settle on 2 copies of each video that way. One iBook/G4/eMac compatible video that syncs correctly with my srt soft subs, and another version that works well with my iPhone 4 and iPad.
All in all, I will end up with more than 3 or 4 version of each video. On my late G5 dual core I have the full 720-1080p uncompressed master files. On my i5 iMac I have the h264 compressed versions for distribution, and lower versions for my iDevices. Now I need to keep either full MPEG-2 files for the iBook to play, or convert to older formats like DivX AVI for our family's legacy machines. I am running out of hard disk space quick now, LOL.
Is there an easier way? -
Performing filter for field Tax Code (MWSKZ) in the Purchase Order
Hello Experts,
We have to apply a filter to the field Tax Code in the purchase order (ME21N / ME22N / ME23N). We've tried to use the search helps SH_T007A and SSH_T007A with a search help exit (e.g. F4_TAXCODE_USER_EXIT), but it is not working. The ABAP programmer has debugged it, and the standard does not execute any line of code in this function (he set a breakpoint in function F4_TAXCODE_USER_EXIT after assigning it to the mentioned search helps)... it seems this search help exit is simply not called by the standard program of the ME2* transactions...
I've tried to look for some other object, and another function called FI_F4_MWSKZ was found... I set a breakpoint there, and when I open the search help for the tax code field in transaction ME21N, it works... but as far as I can see, FI_F4_MWSKZ is a standard function which we cannot change...
Have you ever had the same problem?
We are currently on SAP 4.6C. I've found lots of OSS notes, but only ones valid for 6.0.
Maybe someone can help me on that.
Best regards,
Nilmar
Hi,
Go to transaction GS01 and give your step a name.
Give the table name and field name.
Then you can create a specific value set for that field.
Save.
Now you can use this set to define conditions for your fields in transaction OBBH. -
Performance Tuning for a report
Hi,
We have developed a program which updates 2 fields namely Reorder Point and Rounding Value on the MRP1 tab in TCode MM03.
To update the fields, we are using the BAPI BAPI_MATERIAL_SAVEDATA.
The problem is that when we upload the data using a txt file, the program takes a very long time. Recently, when we uploaded a file containing 200,000 records, it took 27 hours. Below is the main portion of the code (I have omitted the OPEN DATASET etc.). Please help us fine-tune this so that we can upload these 200,000 records in 2-3 hours.
select matnr from mara into table t_mara.
select werks from t001w into corresponding fields of table t_t001w .
select matnr werks from marc into corresponding fields of table t_marc.
loop at str_table into wa_table.
if not wa_table-partnumber is initial.
CALL FUNCTION 'CONVERSION_EXIT_ALPHA_INPUT'
EXPORTING
INPUT = wa_table-partnumber
IMPORTING
OUTPUT = wa_table-partnumber.
endif.
clear wa_message.
read table t_mara into wa_mara with key matnr = wa_table-partnumber.
if sy-subrc is not initial.
concatenate 'material ' wa_table-partnumber ' does not exist'
into wa_message.
append wa_message to t_message.
endif.
read table t_t001w into wa_t001w with key werks = wa_table-HostLocID.
if sy-subrc is not initial.
concatenate 'plant ' wa_table-HostLocID ' does not exist' into
wa_message.
append wa_message to t_message.
else.
case wa_t001w-werks.
when 'DE40'
or 'DE42'
or 'DE44'
or 'CN61'
or 'US62'
or 'SG70'
or 'FI40'.
read table t_marc into wa_marc with key matnr = wa_table-partnumber
werks = wa_table-HostLocID.
if sy-subrc is not initial.
concatenate 'material' wa_table-partnumber ' not extended to plant'
wa_table-HostLocID into wa_message.
append wa_message to t_message.
endif.
when others.
concatenate 'plant ' wa_table-HostLocID ' not allowed'
into wa_message.
append wa_message to t_message.
endcase.
endif.
if wa_message is initial.
data: wa_headdata type BAPIMATHEAD,
wa_PLANTDATA type BAPI_MARC,
wa_PLANTDATAx type BAPI_MARCX.
wa_headdata-MATERIAL = wa_table-PartNumber.
wa_PLANTDATA-plant = wa_table-HostLocID.
wa_PLANTDATAX-plant = wa_table-HostLocID.
wa_PLANTDATA-REORDER_PT = wa_table-ROP.
wa_PLANTDATAX-REORDER_PT = 'X'.
wa_plantdata-ROUND_VAL = wa_table-EOQ.
wa_plantdatax-round_val = 'X'.
CALL FUNCTION 'BAPI_MATERIAL_SAVEDATA'
EXPORTING
HEADDATA = wa_headdata
PLANTDATA = wa_PLANTDATA
PLANTDATAX = wa_PLANTDATAX
IMPORTING
RETURN = t_bapiret.
CALL FUNCTION 'BAPI_TRANSACTION_COMMIT'.
write t_bapiret-message.
endif.
clear: wa_mara, wa_t001w, wa_marc.
endloop.
loop at t_message into wa_message.
write wa_message.
endloop.
Thanks in advance.
Peter
Edited by: kishan P on Sep 17, 2010 4:50 PM
Hi Peter,
I would suggest few changes in your code. Please refer below procedure to optimize the code.
Steps:
Run an SE30 runtime analysis to find out whether the ABAP code or the database fetch is taking the time.
Run the extended program check or Code Inspector to remove any errors and warnings.
A few code changes that I would suggest:
For the select queries from t001w and marc, remove the CORRESPONDING FIELDS clause, as this also reduces performance. (For this, define an internal table with only the required fields, in the order they are specified in the table, and execute a select query to fetch those fields.)
Also add an initial check that str_table[] is not empty before you execute the loop.
Wherever you have used READ TABLE, sort these tables and use BINARY SEARCH.
Clear the work areas after every APPEND statement.
As I don't have an SAP system handy, I would also check whether the importing parameter for the BAPI structure is a table. In case it is a table, I would pass all the records to this table directly and then pass it to the BAPI, rather than looping over every record and updating one at a time.
Hope this helps to resolve your problem.
Have a nice day
Thanks -
How to define CPU sets for different hardware cores?
We're doing a small benchmarking research on parallel benefits of Niagaras, and as one part of
the research, we want to find out whether there are performance differences between hardware
CPU cores and strands within a core. From theory, only one strand of a core is executing at any
given moment, others are "parked" while waiting for IO, RAM, etc. So it may be possible to see
some difference between a program running with 4-processor "pset"s, i.e. 4 strands of one core
and 4 separate cores.
While I can use psrset or poolcfg/pooladm to create and bind actual processor sets consisting
of a number of processors, I have some trouble determining which of the 32 CPU "id numbers"
belong to which hardware core.
On a side note, an X4600 server with two boards lists 4 "Dual-Core AMD Opteron(tm) Processor
8218 CPU #" entries in prtdiag, but 8 "cpu (driver not attached)" entries in prtconf, and
Solaris recognizes 8 processors in pooladm. Again, there's no clue which of these 8 processor
cores belongs to which socket, and this information could be important (or at least interesting)
for SMP vs. NUMA comparison within otherwise the same platform.
So far the nearest ideas I got were from some blog posts about pooladm which suggested to
make a CPU set for a worker zone with CPU IDs 0-28, and moving hardware interrupts to CPU
IDs 29-31. I could not determine whether these are single strands of separate cores, or 3 of 4
strands on a single core, or some random 3 strands how-ever Solaris pleases to affine them?
Is this a list defined somewhere (i.e. IDs 0-3 belong to core 0, IDs 4-7 belong to core 1 and so
on, according to Document X) or can this be determined at runtime (prtdiag/prtconf)?
Is this a fixed list or can the "strand number-CPU core" relations change over time/reboots?
Perhaps, does Solaris or underlying hardware deliberately hide this knowledge from the OS
administrators/users (if so, what is the rationale)?
Finally, am I correct in believing that I can place specific CPU IDs into specific psets via pooladm? Looking at /etc/pooladm.conf I think this is true (the default pool lists all CPU IDs of the system), but I wanted some solid confirmation :)
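For that last point, my reading of the pools documentation suggests it works via poolcfg's `transfer to pset ... (cpu N)` syntax. A sketch of the commands as I understand them, untested here; the pset and pool names are my own placeholders:

```shell
# Sketch only; syntax per my reading of poolcfg(1M), not verified on these boxes.
pooladm -e    # enable the resource pools facility
# Create a 4-CPU pset and move explicitly chosen CPU IDs into it:
poolcfg -c 'create pset strand_set (uint pset.min = 4; uint pset.max = 4)'
poolcfg -c 'transfer to pset strand_set (cpu 0; cpu 1; cpu 2; cpu 3)'
poolcfg -c 'create pool strand_pool'
poolcfg -c 'associate pool strand_pool (pset strand_set)'
pooladm -c    # commit /etc/pooladm.conf to the running system
```

A quick-and-dirty alternative without the pools framework would be `psrset -c 0 1 2 3`, which also takes explicit CPU IDs.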
Thanks for any ideas,
//Jim

A Sun Fire E2900 with 4 dual-core UltraSPARC-IV chips:
# prtdiag
System Configuration: Sun Microsystems sun4u Sun Fire E2900
System clock frequency: 150 MHZ
Memory size: 16GB
====================================== CPUs ======================================
E$ CPU CPU
CPU Freq Size Implementation Mask Status Location
0,512 1350 MHz 16MB SUNW,UltraSPARC-IV 3.1 on-line SB0/P0
1,513 1350 MHz 16MB SUNW,UltraSPARC-IV 3.1 on-line SB0/P1
2,514 1350 MHz 16MB SUNW,UltraSPARC-IV 3.1 on-line SB0/P2
3,515 1350 MHz 16MB SUNW,UltraSPARC-IV 3.1 on-line SB0/P3
# psrinfo -p -v
The physical processor has 2 virtual processors (0 512)
UltraSPARC-IV (portid 0 impl 0x18 ver 0x31 clock 1350 MHz)
The physical processor has 2 virtual processors (1 513)
UltraSPARC-IV (portid 1 impl 0x18 ver 0x31 clock 1350 MHz)
The physical processor has 2 virtual processors (2 514)
UltraSPARC-IV (portid 2 impl 0x18 ver 0x31 clock 1350 MHz)
The physical processor has 2 virtual processors (3 515)
UltraSPARC-IV (portid 3 impl 0x18 ver 0x31 clock 1350 MHz)
# prtconf | grep cpu
cpu (driver not attached)
cpu (driver not attached)
cpu (driver not attached)
cpu (driver not attached)
cpu (driver not attached)
cpu (driver not attached)
cpu (driver not attached)
cpu (driver not attached)

A 6-core single-T1 Sun Fire T2000:
# prtdiag
System Configuration: Sun Microsystems sun4v Sun Fire T200
System clock frequency: 200 MHz
Memory size: 8184 Megabytes
========================= CPUs ===============================================
CPU CPU
Location CPU Freq Implementation Mask
MB/CMP0/P0 0 1000 MHz SUNW,UltraSPARC-T1
MB/CMP0/P1 1 1000 MHz SUNW,UltraSPARC-T1
MB/CMP0/P2 2 1000 MHz SUNW,UltraSPARC-T1
MB/CMP0/P3 3 1000 MHz SUNW,UltraSPARC-T1
MB/CMP0/P4 4 1000 MHz SUNW,UltraSPARC-T1
MB/CMP0/P5 5 1000 MHz SUNW,UltraSPARC-T1
MB/CMP0/P6 6 1000 MHz SUNW,UltraSPARC-T1
MB/CMP0/P7 7 1000 MHz SUNW,UltraSPARC-T1
MB/CMP0/P8 8 1000 MHz SUNW,UltraSPARC-T1
MB/CMP0/P9 9 1000 MHz SUNW,UltraSPARC-T1
MB/CMP0/P10 10 1000 MHz SUNW,UltraSPARC-T1
MB/CMP0/P11 11 1000 MHz SUNW,UltraSPARC-T1
MB/CMP0/P12 12 1000 MHz SUNW,UltraSPARC-T1
MB/CMP0/P13 13 1000 MHz SUNW,UltraSPARC-T1
MB/CMP0/P14 14 1000 MHz SUNW,UltraSPARC-T1
MB/CMP0/P15 15 1000 MHz SUNW,UltraSPARC-T1
MB/CMP0/P16 16 1000 MHz SUNW,UltraSPARC-T1
MB/CMP0/P17 17 1000 MHz SUNW,UltraSPARC-T1
MB/CMP0/P18 18 1000 MHz SUNW,UltraSPARC-T1
MB/CMP0/P19 19 1000 MHz SUNW,UltraSPARC-T1
MB/CMP0/P20 20 1000 MHz SUNW,UltraSPARC-T1
MB/CMP0/P21 21 1000 MHz SUNW,UltraSPARC-T1
MB/CMP0/P22 22 1000 MHz SUNW,UltraSPARC-T1
MB/CMP0/P23 23 1000 MHz SUNW,UltraSPARC-T1
# psrinfo -p -v
The physical processor has 24 virtual processors (0-23)
UltraSPARC-T1 (cpuid 0 clock 1000 MHz)
# prtconf | grep cpu
cpu (driver not attached)
cpu (driver not attached)
cpu (driver not attached)
cpu (driver not attached)
cpu (driver not attached)
cpu (driver not attached)
cpu (driver not attached)
cpu (driver not attached)
cpu (driver not attached)
cpu (driver not attached)
cpu (driver not attached)
cpu (driver not attached)
cpu (driver not attached)
cpu (driver not attached)
cpu (driver not attached)
cpu (driver not attached)
cpu (driver not attached)
cpu (driver not attached)
cpu (driver not attached)
cpu (driver not attached)
cpu (driver not attached)
cpu (driver not attached)
cpu (driver not attached)
cpu (driver not attached)

//Jim
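For what it's worth, the paired IDs in the `psrinfo -p -v` output above (0 with 512, 1 with 513, and so on) can be extracted mechanically. A small sketch that groups CPU IDs by physical chip, fed with the first two E2900 processors quoted above as a canned sample; on a live system you would pipe `psrinfo -pv` in directly:

```shell
#!/bin/sh
# Group CPU IDs by physical chip from `psrinfo -pv`-style output.
# The heredoc reuses the E2900 output quoted above as a canned sample.
psrinfo_sample() {
cat <<'EOF'
The physical processor has 2 virtual processors (0 512)
  UltraSPARC-IV (portid 0 impl 0x18 ver 0x31 clock 1350 MHz)
The physical processor has 2 virtual processors (1 513)
  UltraSPARC-IV (portid 1 impl 0x18 ver 0x31 clock 1350 MHz)
EOF
}

# The CPU-ID list sits between the parentheses on each "physical processor" line.
psrinfo_sample | awk -F'[()]' '
/physical processor/ { printf "chip %d: cpu ids %s\n", chip++, $2 }'
```

So on the E2900, IDs 0 and 512 appear to be the two cores of one chip, which is exactly the grouping the original question was after.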