systemd-networkd service takes too long on Compaq Mini CQ10-120LA
Hi, I'm Oppen. I've been using Arch for around 6 months now, and so far I'm very pleased with it.
However, I've noticed something that I find odd. I don't consider it necessarily a problem, but it sparks my curiosity.
I'm running Arch on three different computers.
One of them is relatively new (well, "new" where I'm from means a 2012 model): an HP Envy m6 running on an AMD A10.
Another is relatively old: a VIA K8M800 board with an Athlon64 X2.
Both of them show similar boot times for all of the services I've set up, which are pretty much the same on all of my boxes.
Then, I have this netbook. It also shows similar times for all services, except for networkd, which takes 10-15 times longer than on the other two boxes: over 5 seconds to finish. I'd like to understand why this is happening and, if possible, fix it so the boot time shrinks.
Here are my guesses; any criticism, corrections or ideas are welcome:
- Since my other two boxes are AMD64, both running 64-bit Arch, and this one is an Atom running i686 Arch, one lead would be a lack of architecture-specific optimizations. But if that were the case, it remains unclear why it only happens with networkd.
- Another guess would be that the handshaking itself takes longer, either because the wifi driver or hardware is slower, or because of something related to wpa_supplicant or networkd.
I tried searching the forums and Google for similar reported issues, but had no luck.
Thanks in advance for any suggestions or ideas,
Mario.
For systemd-networkd, from the last boot, I get this log:
ene 29 19:46:41 westeroos systemd-networkd[213]: rtnl: received address for nonexistent link (1), ignoring
ene 29 19:46:41 westeroos systemd-networkd[213]: rtnl: received address for nonexistent link (1), ignoring
ene 29 19:46:41 westeroos systemd-networkd[213]: wlan0 : link configured
ene 29 19:46:43 westeroos systemd-networkd[213]: wlan0 : gained carrier
I can't make sense of the first two lines, but otherwise it doesn't seem to point to any problems.
For wpa_supplicant@wlan0.service, I get this:
ene 29 08:02:29 westeroos wpa_supplicant[207]: Successfully initialized wpa_supplicant
ene 29 08:02:29 westeroos wpa_supplicant[207]: wlan0: SME: Trying to authenticate with e0:24:7f:e0:57:4d (SSID='5744' freq=2422 MHz)
ene 29 08:02:29 westeroos wpa_supplicant[207]: wlan0: Trying to associate with e0:24:7f:e0:57:4d (SSID='5744' freq=2422 MHz)
ene 29 08:02:29 westeroos wpa_supplicant[207]: wlan0: Associated with e0:24:7f:e0:57:4d
ene 29 08:02:29 westeroos wpa_supplicant[207]: wlan0: WPA: Key negotiation completed with e0:24:7f:e0:57:4d [PTK=CCMP GTK=CCMP]
ene 29 08:02:29 westeroos wpa_supplicant[207]: wlan0: CTRL-EVENT-CONNECTED - Connection to e0:24:7f:e0:57:4d completed [id=0 id_str=]
ene 29 08:02:29 westeroos wpa_supplicant[207]: wlan0: CTRL-EVENT-DISCONNECTED bssid=e0:24:7f:e0:57:4d reason=3 locally_generated=1
ene 29 08:02:29 westeroos wpa_supplicant[207]: wlan0: CTRL-EVENT-TERMINATING
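To pin down where those extra seconds go, systemd's own timing tools are the first stop. This is only a sketch (it assumes a stock Arch install where systemd-analyze is available, and GNU date), plus the one data point the journal excerpt above already provides:

```shell
# Which units are slowest, and what does systemd-networkd wait on?
if command -v systemd-analyze >/dev/null 2>&1; then
    systemd-analyze blame | grep -i network
    systemd-analyze critical-chain systemd-networkd.service
fi

# The journal itself gives one measurable gap: between
# "link configured" (19:46:41) and "gained carrier" (19:46:43).
start=$(date -d 19:46:41 +%s)
end=$(date -d 19:46:43 +%s)
echo "carrier gained $((end - start))s after link configured"
```

If most of the unit's time falls between "link configured" and "gained carrier", the wait is on the wireless association (driver/AP side) rather than on networkd itself.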
Last edited by Oppen (2015-01-30 00:17:09)
Similar Messages
-
Drill Through reports takes too long
Hi all,
I need some suggestions/help with our drill through reports. We are using Hyperion 11.1.1.3 and the cube is ASO.
We have drill through reports set up in Essbase studio for drilling down from Essbase to Oracle database. It takes too long (like 30 mins for fetching 1000 records) and the query is simple.
What changes can we make to bring down this time? Please advise.
Thanks.
Hi Glenn,
We tried optimizing the drill-through SQL query, but when we run it directly in TOAD it takes 23 secs, while the drill-through on the same intersection
took more than 25 mins. Following is our query structure:
SELECT *
  FROM "Table A" cp_594
 INNER JOIN "Table B" cp_595 ON (cp_594.key = cp_595.key)
 WHERE Upper(cp_595."Dim1") IN
       (SELECT Upper(CHILD)
          FROM (SELECT * FROM DIM_TABLE_1 WHERE CUBE = 'ALL')
         WHERE CONNECT_BY_ISLEAF = 1
         START WITH PARENT = $$Dim1$$
         CONNECT BY PRIOR CHILD = PARENT
        UNION ALL
        SELECT Upper(CHILD)
          FROM DIM_TABLE_1
         WHERE CUBE = 'ALL'
           AND REPLACE('GL_'||CHILD, 'GL_IC_', 'IC_') = $$Dim1$$)
   AND -- the same predicate repeated for 5 more dimensions
Can you suggest some improvements? Please advise.
Thanks -
11gR2:crsctl, srvctl commands takes too long to respond
Hi,
I have successfully configured 11gR2 two-node RAC on ASM on Windows 2008 64-bit.
Everything works very well, like connecting to the database and querying it. Node restart also takes an acceptable time to go down and restart the clusterware and database.
But executing crsctl status resource -t or srvctl status database -d db_name from any node takes 10-15 min to give output.
They give output and everything completes successfully, but they take too long to respond.
The questions are:
- If everything works fine, then why do crsctl and srvctl take so long to respond?
- What could be blocking these commands from gathering clusterware and database status on all nodes?
- What additional info can I provide that would be helpful?
dag wrote:
I don't have this issue either. Are you auto-starting mpd?
That time is how long it has been up; in other words, you opened it and then closed it at that time.
I'm not sure if you are referring to me or not, but in my screenshot I am timing the lag in ncmpcpp by pressing 'q' in the terminal during the delay, so it quits ncmpcpp immediately after the lag. The lag is longer for ncmpc because the interface loads before the program connects to mpd, so I have to stop it manually immediately after it connects.
WonderWoofy wrote:I don't have this problem... if you are starting it as a systemd user service, maybe there is relevant information in the journal.
The journal did not reveal anything relevant sadly. I have now tried launching mpd without systemd, and the lag remains. I have also noticed that a small mpd programming project I am writing also experiences the same lag when it tries to connect to mpd. -
Query designer takes too long to save a query
Hi dear SDN friends.
I'm working with Query Designer in BI 7 and sometimes it takes too long to save a query, about ten minutes. Sometimes it never finishes saving, and other times it saves the same query in 1 minute.
Can anybody please give some advice about this behavior of the Query Designer?
We have recently updated BI to SP18. In Query Designer I have SP5, revision 529.
Best regards,
Karim Reyes
Hello Karim,
I would suggest testing this again in the latest Frontend Patch available (FEP 602). In FEPs 600, 601, & 602 there were some performance and stability improvements made which may correct your issue. If the issue persists, I would suggest then opening a Customer Message via Service Marketplace.
It can be downloaded from:
http://service.sap.com/swdc
→ Download
→ Support Packages and Patches
→ Entry by Application Group
→ SAP Frontend Components
→ BI ADDON FOR SAPGUI
→ BI 7.0 ADDON FOR SAPGUI 7.10
→ BI 7.0 ADDON FOR SAPGUI 7.10
→ Win32
See SAP Note 1085218 for planned FEP releases.
I hope that helps.
Regards,
Tanner Spaulding
SAP NetWeaver RIG Americas, BI -
Business Rules Project Takes Too Long to Open
Does anyone know why it takes so long (~3-5 minutes) to open/edit a security project defined for assigning business rules to Planning application forms? We are using Hyperion v11.1.1.3.0. Essbase is on a Windows server, Shared Services on Solaris 10 Unix. Even before we migrated Essbase to Windows to gain better performance running calcs, opening projects using EAS was always very slow. Please advise if there is a way to improve performance here.
Clear Cookies & Cache
* https://support.mozilla.com/en-US/kb/Template:clearCookiesCache
Clear the Network Cache
* https://support.mozilla.com/en-US/kb/How%20to%20clear%20the%20cache#w_clear-the-cache
Firefox takes a long time to start up
* https://support.mozilla.com/en-US/kb/firefox-takes-long-time-start-up
Check and tell if it's working. -
[SOLVED] initramfs takes too long to load
Using systemd-analyze I found out that initramfs takes too long to load:
463ms (kernel) + 11875ms (initramfs) + 6014ms (userspace) = 18353ms
My HOOKS array in mkinitcpio.conf is the following:
HOOKS="base udev autodetect modconf block encrypt lvm2 filesystems usbinput fsck"
I suspect that the long loading time of initramfs is caused by partitions decryption (I am using dm-crypt / LUKS on top of LVM).
Is there any tool that can report the loading times of the HOOKS separately, just like systemd-analyze plot does for userspace?
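The userspace half at least can be visualized with the plot mentioned above; the rest is arithmetic on the numbers already posted. A sketch (the SVG filename is arbitrary, and as far as I know mkinitcpio has no built-in per-hook timer, so the initramfs slice stays opaque without manual instrumentation):

```shell
# Per-unit timeline of userspace start-up, rendered as an SVG.
if command -v systemd-analyze >/dev/null 2>&1; then
    systemd-analyze plot > boot.svg
fi

# The figures already posted show initramfs dominating the boot:
total=18353      # ms, whole boot
initramfs=11875  # ms, initramfs alone
echo "initramfs share: $(( 100 * initramfs / total ))%"
```

With nearly two thirds of the boot spent in the initramfs, the encrypt hook's passphrase prompt is the obvious suspect before anything else.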
Last edited by nasosnik (2013-01-21 14:45:28)
cfr wrote:
In what sense is it "too long"?
I'm just wondering: suppose you find out that it is because you are using encryption. Would you then switch to a non-encrypted system? Would you make better use of the extra seconds you might save on those rare occasions when you reboot? Even if you reboot twice a day, you might save what? Suppose you even saved 5 s per boot. That would give you a whole extra 1 minute and 10 seconds a week, assuming you don't multitask. Obviously if you multitask, the gain will be less. Would that make it worth risking the security of your data?
EDIT: I didn't mean this to sound as confrontational as it does now I read it back. It just always puzzles me that people are so concerned about shaving a few milliseconds here and there. I always hope that they put the time they save to good use but then I realise that the time they spent shaving the milliseconds off will obviously outstrip the time saved.
I really don't care about the boot time, for the reasons you have already mentioned. I just want to figure out if there is any misconfiguration. I am investigating why initramfs takes significantly longer to load compared with my desktop Arch installation (non-encrypted), where initramfs takes 1316 ms. My desktop has a Pentium 4 CPU and the laptop has a quad-core i7.
roentgen wrote:
11875ms (initramfs) means the time it takes you to type the password.
systemd-analyze is not counting the time spent typing the password.
Many times my computer takes too long to connect to a new website. I have wireless internet (Time Capsule) and I am running a pretty powerful real-time financial work program at the same time. What is the best solution? Upgrading speed from the cable network? Is it a hard drive issue? Do I only need to "clean out" the computer? Or all of the above... not too computer savvy. It is a MacBook Pro OS X 10.6.8 (late 2010).
Almost certainly none of the above! Try each of the following in this order:
Select 'Reset Safari' from the Safari menu.
Close down Safari; move <home>/Library/Caches/com.apple.Safari/Cache.db to the trash; restart Safari.
Change the DNS servers in your network settings to use the OpenDNS servers: 208.67.222.222 and 208.67.220.220
Turn off DNS pre-fetching by entering the following command in Terminal and restarting Safari:
defaults write com.apple.safari WebKitDNSPrefetchingEnabled -boolean false -
Accessing BKPF table takes too long
Hi,
Is there a way to write a faster, more optimized SQL query to access the table BKPF? Or are there smaller tables that contain the same data?
I'm using this:
select bukrs gjahr belnr budat blart
into corresponding fields of table i_bkpf
from bkpf
where bukrs eq pa_bukrs
and gjahr eq pa_gjahr
and blart in so_DocTypes
and monat in so_monat.
The report takes too long and eats up a lot of resources.
Any helpful advice is highly appreciated. Thanks!
Hi max,
I also tried using BUDAT in the WHERE clause of my SQL statement, but even that takes too long.
select bukrs gjahr belnr budat blart monat
appending corresponding fields of table i_bkpf
from bkpf
where bukrs eq pa_bukrs
and gjahr eq pa_gjahr
and blart in so_DocTypes
and budat in so_budat.
I also tried accessing the table per day, but that didn't work either...
while so_budat-low le so_budat-high.
select bukrs gjahr belnr budat blart monat
appending corresponding fields of table i_bkpf
from bkpf
where bukrs eq pa_bukrs
and gjahr eq pa_gjahr
and blart in so_DocTypes
and budat eq so_budat-low.
so_budat-low = so_budat-low + 1.
endwhile.
I think our BKPF tables contains a very large set of data. Is there any other table besides BKPF where we could get all accounting document numbers in a given period? -
Hi!
I am in trouble.
The following is the query:
SELECT inv_no, inv_name, inv_desc, i.cat_id, cat_name, i.sub_cat_id,
sub_cat_name, asset_cost, del_date, i.bl_id, gen_desc bl_desc, p.prvcode, prvdesc, cur_loc,
pldesc, i.pmempno, pmname, i.empid, empname
FROM inv_reg i,
cat_reg c,
sub_cat_reg s,
gen_desc_reg g,
ploc p,
province r,
pmaster m,
iemp_reg e
WHERE i.sub_cat_id = s.sub_cat_id
AND i.cat_id = s.cat_id
AND s.cat_id = c.cat_id
AND i.bl_id = g.gen_id
AND i.cur_loc = p.plcode
AND p.prvcode = r.prvcode
AND i.pmempno = m.pmempno(+)
AND i.empid = e.empid(+)
&wc
order by prvdesc, pldesc, cat_name, sub_cat_name, inv_no
This query returns 32000 records.
When I run this query in Reports 10g, it takes 10 to 20 minutes to generate the report.
How can I optimize it?
Hi Waqas Attari,
Please study and try this: "When your query takes too long ..."
Hope it helps.
Regards,
Abdetu... -
OPM process execution: process parameters take too long to complete
PROCESS_PARAMETERS are inserted every 15 min using gme_api_pub packages. Sometimes the batch, i.e. completion of the request, takes too long: about 5-6 hrs, while at other times it takes only 15-20 mins. This happens at regular intervals... If anybody can guide me I will be thankful to him/her.
thanks in advance.
regds,
Shailesh
Generally the slowest part of the process is in the extraction itself...
Check in your source system and see how long the processes are taking, and whether there are delays, locks or dumps in the database... If your source is R/3 or ECC, transactions like SM37, SM21 and ST22 can help monitor this activity...
Consider running fewer processes in parallel if you have too many and see delays in jobs... Also consider indexing some of the tables in the source system to expedite the extraction; make sure there are no heavy processes or interfaces running in the source system at the same time you're trying to load... Check with your Basis guys for activity peaks and plan accordingly...
Also check SM21 in BW for database errors or delays...
Just some ideas... -
Web application deployment takes too long?
Hi All,
We have a WLS 10.3.5 clustering environment with one admin server and two managed servers. When we try to deploy a sizable web application, it takes about 1 hour to finish, which seems too long. Here is the output from one of the two managed servers' system logs. Could anyone tell me whether this is normal or not? If not, how can I improve it?
Thanks in advance,
John
+####<Feb 29, 2012 12:11:03 PM EST> <Info> <Deployer> <entwl3t-vm.co.pinellas.fl.us> <Pinellas1tMS3> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1330535463373> <BEA-149059> <Module copyrequest of application copyrequest [Version=COPYREQUEST0002bb] is transitioning from STATE_NEW to STATE_PREPARED on server Pinellas1tMS3.>+
+####<Feb 29, 2012 12:11:05 PM EST> <Info> <Deployer> <entwl3t-vm.co.pinellas.fl.us> <Pinellas1tMS3> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <9baa7a67b5727417:26f76f6c:135ca05cff2:-8000-00000000000000b0> <1330535465664> <BEA-149060> <Module copyrequest of application copyrequest [Version=COPYREQUEST0002bb] successfully transitioned from STATE_NEW to STATE_PREPARED on server Pinellas1tMS3.>+
+####<Feb 29, 2012 12:11:06 PM EST> <Info> <Deployer> <entwl3t-vm.co.pinellas.fl.us> <Pinellas1tMS3> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1330535466493> <BEA-149059> <Module copyrequest of application copyrequest [Version=COPYREQUEST0002bb] is transitioning from STATE_PREPARED to STATE_ADMIN on server Pinellas1tMS3.>+
+####<Feb 29, 2012 12:11:06 PM EST> <Info> <Deployer> <entwl3t-vm.co.pinellas.fl.us> <Pinellas1tMS3> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1330535466493> <BEA-149060> <Module copyrequest of application copyrequest [Version=COPYREQUEST0002bb] successfully transitioned from STATE_PREPARED to STATE_ADMIN on server Pinellas1tMS3.>+
+####<Feb 29, 2012 12:11:06 PM EST> <Info> <Deployer> <entwl3t-vm.co.pinellas.fl.us> <Pinellas1tMS3> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1330535466809> <BEA-149059> <Module copyrequest of application copyrequest [Version=COPYREQUEST0002bb] is transitioning from STATE_ADMIN to STATE_ACTIVE on server Pinellas1tMS3.>+
+####<Feb 29, 2012 12:11:06 PM EST> <Info> <Deployer> <entwl3t-vm.co.pinellas.fl.us> <Pinellas1tMS3> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1330535466809> <BEA-149060> <Module copyrequest of application copyrequest [Version=COPYREQUEST0002bb] successfully transitioned from STATE_ADMIN to STATE_ACTIVE on server Pinellas1tMS3.>+
+####<Feb 29, 2012 1:00:42 PM EST> <Info> <Diagnostics> <entwl3t-vm.co.pinellas.fl.us> <Pinellas1tMS3> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1330538442300> <BEA-320143> <Scheduled 1 data retirement tasks as per configuration.>+
+####<Feb 29, 2012 1:00:42 PM EST> <Info> <Diagnostics> <entwl3t-vm.co.pinellas.fl.us> <Pinellas1tMS3> <[ACTIVE] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1330538442301> <BEA-320144> <Size based data retirement operation started on archive HarvestedDataArchive>+
+####<Feb 29, 2012 1:00:42 PM EST> <Info> <Diagnostics> <entwl3t-vm.co.pinellas.fl.us> <Pinellas1tMS3> <[ACTIVE] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1330538442301> <BEA-320145> <Size based data retirement operation completed on archive HarvestedDataArchive. Retired 0 records in 0 ms.>+
+####<Feb 29, 2012 1:00:42 PM EST> <Info> <Diagnostics> <entwl3t-vm.co.pinellas.fl.us> <Pinellas1tMS3> <[ACTIVE] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1330538442301> <BEA-320144> <Size based data retirement operation started on archive EventsDataArchive>+
+####<Feb 29, 2012 1:00:42 PM EST> <Info> <Diagnostics> <entwl3t-vm.co.pinellas.fl.us> <Pinellas1tMS3> <[ACTIVE] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1330538442301> <BEA-320145> <Size based data retirement operation completed on archive EventsDataArchive. Retired 0 records in 0 ms.>+
+####<Feb 29, 2012 1:10:23 PM EST> <Info> <Cluster> <entwl3t-vm.co.pinellas.fl.us> <Pinellas1tMS3> <weblogic.cluster.MessageReceiver> <<WLS Kernel>> <> <> <1330539023098> <BEA-003107> <Lost 2 unicast message(s).>+
+####<Feb 29, 2012 1:10:36 PM EST> <Info> <Cluster> <entwl3t-vm.co.pinellas.fl.us> <Pinellas1tMS3> <[ACTIVE] ExecuteThread: '2' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1330539036105> <BEA-000111> <Adding Pinellas1tMS2 with ID -9071779833610528123S:entwl2t-vm:[7005,7005,-1,-1,-1,-1,-1]:entwl2t-vm:7005,entwl3t-vm:7007:Pinellas1tDomain:Pinellas1tMS2 to cluster: Pinellas1tCluster1 view.>+
+####<Feb 29, 2012 1:11:24 PM EST> <Info> <Cluster> <entwl3t-vm.co.pinellas.fl.us> <Pinellas1tMS3> <[STANDBY] ExecuteThread: '3' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1330539084375> <BEA-000128> <Updating -9071779833610528123S:entwl2t-vm:[7005,7005,-1,-1,-1,-1,-1]:entwl2t-vm:7005,entwl3t-vm:7007:Pinellas1tDomain:Pinellas1tMS2 in the cluster.>+
+####<Feb 29, 2012 1:11:24 PM EST> <Info> <Cluster> <entwl3t-vm.co.pinellas.fl.us> <Pinellas1tMS3> <[STANDBY] ExecuteThread: '4' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1330539084507> <BEA-000128> <Updating -9071779833610528123S:entwl2t-vm:[7005,7005,-1,-1,-1,-1,-1]:entwl2t-vm:7005,entwl3t-vm:7007:Pinellas1tDomain:Pinellas1tMS2 in the cluster.>+
Edited by: john wang on Feb 29, 2012 10:36 AM
Edited by: john wang on Feb 29, 2012 10:37 AM
Edited by: john wang on Feb 29, 2012 10:38 AM
Hi John,
There may be some circumstances, like when there are many files in the WEB-INF folder and JSPs that don't use TLDs.
I don't think a 1-hour deployment is normal; it should be much faster.
Since you are using 10.3.5, I suggest you install the corresponding patch:
1. Download patch 10118941 (p10118941_1035_Generic.zip)
2. Uncompress the file p10118941_1035_Generic.zip
3. Copy the required files (patch-catalog_XXXXX.xml, CIRF.jar ) to the Patch Download Directory (typically, this folder is <WEBLOGIC_HOME>/utils/bsu/cache_dir).
4. Rename the file patch-catalog_XXXXX.xml to patch-catalog.xml.
5. Start Smart Update from <WEBLOGIC_HOME>/utils/bsu/bsu.sh .
6. Select "Work Offline" mode.
7. Go to File->Preferences, and select "Patch Download Directory".
8. Click "Manage Patches" on the right panel.
9. You will see the patch in the panel below (Downloaded Patches)
10. Click "Apply button" of the downloaded patch to apply it to the target installation and follow the instructions on the screen.
11. Add "-Dweblogic.jsp.ignoreTLDsProcessingInWebApp=true" to the Java options to ignore additional findTLDs cost.
12. Restart servers.
Hope this helps.
Thanks,
Cris -
RPURMP00 program takes too long
Hi Guys,
Need some help on this one guys. Not getting any where with this issue.
I am running RPURMP00 (Program to Create Third-Party Remittance Posting Run), and even in test mode for 1 employee it takes too long.
I ran this in the background during off hours, but it takes 19,000+ sec and then cancels.
The long text message is "No entry in table T51R6_FUNDINFO (Remittance detail table for all entities) for key 0002485844" followed by "Job cancelled after system exception ERROR_MESSAGE".
I checked the program, found a nested loop within it (include RPURMP02), and decided to debug it with a break point.
It short-dumped; here are the ST22 message and source code extract.
---- Message ----
"Time limit exceeded."
"The program "RPURMP00" has exceeded the maximum permitted runtime without interruption and has therefore been terminated."
---- Source code extract ----
Include RPURMP02
172 *&---------------------------------------------------------------------*
173 *& Form get_advice_info
174 *&---------------------------------------------------------------------*
175 * text
176 *----------------------------------------------------------------------*
177 * --> p1 text
178 * <-- p2 text
179 *----------------------------------------------------------------------*
180 FORM get_advice_info .
181
182 * get information for advice form only if vendor sub-group and
183 * employee detail is maintained
184 IF ( NOT t51rh-lifsg IS INITIAL ) AND
185 ( NOT t51rh-hrper IS INITIAL ).
186
187 * get remittance items employee number
188 SELECT * FROM t51r4 WHERE remky = t51r5-remky. "#EC CI_GENBUFF "SAM0632658
189 * get payroll seqno determined by PERNR and RDATN
>>>>> SELECT * FROM t51r8 WHERE pernr = t51r4-pernr
191 AND rdatn = t51r5-rdatn
192 ORDER BY PRIMARY KEY. "#EC CI_GENBUFF
193 EXIT.
194 ENDSELECT.
Has anyone ever come across this situation? Any input from anyone on this?
Regards.
CJ
Hi,
What is your SAP version?
Have you checked whether there are any OSS notes on performance?
Regards,
Atish -
AME CS6 rendering with AE and Pr takes too long
Hi Guys,
Need some help here. I rendered a 30-sec MP4 video in 1920 x 1080 HD at 25 fps, without scripting, in AME, and it took 4 hours!
Why does it take so long? I rendered a 2-minute video in the same format with scripting, and it took less than 30 minutes.
I'm using After Effects and Premiere Pro, both CS6, with Dynamic Link in AME.
What could be wrong in my current settings?
Any help would be appreciated.
Thanks!
This may be a waste of time, but it won't take a minute and is something you should always do whenever things go strangely wrong: trash the preferences, assuming you haven't done it already.
Many weird things happen as a result of corrupt preferences which can create a vast range of different symptoms, so whenever FCP X stops working properly in any way, trashing the preferences should be the first thing you do using this free app.
http://www.digitalrebellion.com/prefman/
Shut down FCP X, open PreferenceManager and in the window that appears:-
1. Ensure that only FCP X is selected.
2. Click Trash
The job is done instantly and you can re-open FCP X.
There is absolutely no danger in trashing preferences and you can do it as often as you like.
The preferences are kept separately from FCP X and if there aren't any when FCP X opens it automatically creates new ones . . . instantly. -
My Query takes too long ...
Hi ,
Env , DB 10G , O/S Linux Redhat , My DB size is about 80G
My query takes too long, about 5 days, to get results. Can you please help rewrite this query in a better way?
declare
x number;
y date;
START_DATE DATE;
MDN VARCHAR2(12);
TOPUP VARCHAR2(50);
begin
for first_bundle in
select min(date_time_of_event) date_time_of_event ,account_identifier ,top_up_profile_name
from bundlepur
where account_profile='Basic'
AND account_identifier='665004664'
and in_service_result_indicator=0
and network_cause_result_indicator=0
and DATE_TIME_OF_EVENT >= to_date('16/07/2013','dd/mm/yyyy')
group by account_identifier,top_up_profile_name
order by date_time_of_event
loop
select sum(units_per_tariff_rum2) ,max(date_time_of_event)
into x,y
from OLD_LTE_CDR
where account_identifier=(select first_bundle.account_identifier from dual)
and date_time_of_event >= (select first_bundle.date_time_of_event from dual)
and -- no more than a month
date_time_of_event < ( select add_months(first_bundle.date_time_of_event,1) from dual)
and -- finished his bundle then buy a new one
date_time_of_event < ( SELECT MIN(DATE_TIME_OF_EVENT)
FROM OLD_LTE_CDR
WHERE DATE_TIME_OF_EVENT > (select (first_bundle.date_time_of_event)+1/24 from dual)
AND IN_SERVICE_RESULT_INDICATOR=26);
select first_bundle.account_identifier ,first_bundle.top_up_profile_name
,FIRST_BUNDLE.date_time_of_event
INTO MDN,TOPUP,START_DATE
from dual;
insert into consumed1 VALUES(X,topup,MDN,START_DATE,Y);
end loop;
COMMIT;
end;
> where account_identifier=(select first_bundle.account_identifier from dual)
Why are you doing this? It's a completely unnecessary subquery.
Just do this:
where account_identifier = first_bundle.account_identifier
Same for all your other FROM DUAL subqueries. Get rid of them.
More importantly, don't use a cursor for loop. Just write one big INSERT statement that does what you want. -
Sql Query takes too long to enter into the first line
Hi Friends,
I am using SQL Server 2008 and running a query to fetch data from the database. When I run it for the first time after executing DBCC FREEPROCCACHE to clear the cache, it takes too long (7 to 9 seconds) to enter the first
line of the stored procedure. Once it enters the first statement of the SP, it fetches the data within a second. I think there is no problem with the SQL query itself. Kindly let me know if you know the reason behind this.
Sample Example:
CREATE PROCEDURE Sp_Name
AS
BEGIN
    PRINT GETDATE()
    -- SQL statements for fetching data
    PRINT GETDATE()
END
In the above example, there is no difference between the first date and the second date.
Please help me troubleshoot this problem.
Thanks & Regards,
Rajkumar.R
> I am running it for the first time after executing DBCC FREEPROCCACHE to clear the cache, and it takes too long (7 to 9 seconds)
Additional to Manoj:
DBCC FREEPROCCACHE clears the procedure cache, so all stored procedures must be newly compiled on the first call.
Olaf Helper