Write Relation in HRPAD00INFTY
Hi everybody,
I am trying to write a relation between a BU and a person in an implementation of BAdI HRPD00INFTY, method IN_UPDATE.
I have tried different function modules:
RH_WRITE_RELATION,
RH_INSERT_INFTY(_DIRECT), and
RH_PNNNN_MAINTAIN,
in buffer mode and in dialog mode. All of them return sy-subrc = 0, and RH_PNNNN_MAINTAIN walks through all the dialog steps, but nothing ends up on the database.
Can anyone help?
Hi Tobias,
I remember using FM 'RH_UPDATE_DATABASE' after the call to RH_INSERT_INFTY for a similar issue.
Use FM RH_UPDATE_DATABASE; if the problem still persists, let me know.
Check the documentation of the FM for how to use it.
Regards,
Shrinivas
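A rough sketch of that sequence, assuming buffer mode; the parameter values and the relation table fill are placeholders, not a tested implementation:

```abap
* Write the relation record (infotype 1001) into the PD buffer,
* then flush the buffer to the database.
CALL FUNCTION 'RH_INSERT_INFTY'
  EXPORTING
    vtask  = 'B'           " buffer mode
  TABLES
    innnn  = lt_p1001      " filled with the BU-to-person relation record
  EXCEPTIONS
    OTHERS = 1.

IF sy-subrc = 0.
  CALL FUNCTION 'RH_UPDATE_DATABASE'
    EXPORTING
      vtask  = 'D'         " write the buffered records to the database
    EXCEPTIONS
      OTHERS = 1.
ENDIF.
```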
Similar Messages
-
How can I write relative path in File, ImageIcon objects
I have downloaded examples from
http://java.sun.com/docs/books/tutorial/uiswing/components/example-swing/index.html
but for some reason the images are invisible unless I use absolute paths (I am on Windows). How can I write relative paths in File and ImageIcon objects?
Hello;
I've found a lot of complaints about this through the Forte forum as well as on Dejanews. I'm just curious as to whether any of you ended up finding a solution to this problem.
There was one posting about setting up the subdirectory as a URL rather than a String and then using that as the source of the ImageIcon ... but this didn't work for me.
Any help whatsoever would be appreciated.
Thanks! -
Read from one file ... and write
I wrote the code below to read from a file
whose input is in this format:
$ filla
% dillla
I want to write it into another file in a different format, like:
filla, $
dilla, %
My code (below) doesn't work; I commented out most of it because I had doubts.
I read the file line by line. Each line has two words, a special character ($ or %) and a name, separated by a space.
I stored both in different arrays and intended to write them to the output file.
import java.io.*;
public class data {
    public static void main(String args[]) {
        int kMaxLines = 60000;
        String[][] valuePairs = new String[2][kMaxLines];
        String symbol[] = new String[kMaxLines];
        String name[] = new String[kMaxLines];
        int k = 0;
        try {
            FileInputStream fstream = new FileInputStream("c:/inputs.txt");
            BufferedReader br = new BufferedReader(new InputStreamReader(fstream));
            //FileWriter output = new FileWriter("c:/output.txt");
            //BufferedWriter bw = new BufferedWriter(output);
            String strLine;
            int line = 0;
            while ((strLine = br.readLine()) != null) {
                String temp_strs[] = strLine.split(" ");
                for (int j = 0; j < 2; j++) {
                    valuePairs[j][line] = temp_strs[j];
                }
                //System.out.println(temp_strs[0] + " " + temp_strs[1]);
                symbol[k] = temp_strs[0];
                name[k] = temp_strs[1];
                line++;
                k++;
            }
            // Iterate only over the k lines actually read; looping to kMaxLines
            // prints null for every array slot that was never filled.
            for (int i = 0; i < k; i++) {
                System.out.println(symbol[i] + " " + name[i]);
            }
            br.close();
            // to write to file
            //String names_str = "";
            //for (k = 0; k < kMaxLines; k++)
            //    names_str = names_str + "" + name[k];
            //bw.write("@relation train");
            //bw.write("@attribute names" + " {" + names_str + "}");
            //bw.write("@attribute class {+,-}");
            //bw.write("@data" + "\n");
            //for (k = 0; k < kMaxLines; k++)
            //    bw.write(name[k] + "," + symbol[k]);
        } catch (Exception e) {
            // file not found, etc.
            e.printStackTrace();
        }
    }
}
Why the multiple arrays?
I'd just read it into List<String[]>... an extensible collection of "the fields on each line"... then you just write them out as required.
second-field + comma + first-field.
I suggest your next move should be to take a backup copy of your current class then rip-all that commented-out code out of it, and reformat it properly... starting from a clean(er) slate... then try swapping over to a List<String[]> instead of that cumbersome String[][] matrix.
Try that... if you get stuckeroonied then don't be afraid to ask again.
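A minimal sketch of that List&lt;String[]&gt; approach (the file names are placeholders):

```java
import java.io.IOException;
import java.nio.file.*;
import java.util.*;

public class SwapFields {
    // Reads lines like "$ filla" and returns each one as its array of fields.
    static List<String[]> readPairs(Path in) throws IOException {
        List<String[]> rows = new ArrayList<>();
        for (String line : Files.readAllLines(in)) {
            if (!line.isEmpty()) rows.add(line.split(" "));
        }
        return rows;
    }

    public static void main(String[] args) throws IOException {
        Path in = Paths.get("inputs.txt");   // placeholder paths
        Path out = Paths.get("output.txt");
        List<String> swapped = new ArrayList<>();
        for (String[] row : readPairs(in)) {
            // second-field + comma + first-field, as suggested above
            swapped.add(row[1] + ", " + row[0]);
        }
        Files.write(out, swapped);
    }
}
```

No fixed-size arrays, no separate line counter: the list grows as needed and its size is the line count.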
Cheers. Keith. -
Hello everyone!
I'm a newbie.
The problem is that the [Disk Read and Write] and [Network Received and Transmitted] columns have no data. The others (CPU, memory, etc.) are OK. I have already set the base rates for [Disk Read and Write] and [Network Received and Transmitted], and I performed operations such as copying files from one VM to another. But the report still shows 0.00.
Can anybody tell me why? Is anything wrong in my vCenter Database?
Thank you very much!
Hello,
Thanks for sending the data.
The query that we asked you to run gives the values of network transmit-receive and disk read-write that CBM data collector has collected from VC.
A closer look at the output of the query shows that CBM has been able to collect data for network received and transmit (resource_id=6) from VC. The attached report contains costs for network received and transmit.
But there is no data for disk read and write (resource_id=2). So the report shows 0 costs for disk read and write.
Now we need to find the cause of this data's absence from the CBM DB. As a first step, we would like to see whether the VC DB contains disk read-write data. Can you please attach the outputs of the following steps?
1) Please run the following query on the CBM DB:
select entity_moid from cb_vc_entity where vc_entity_id=(select vc_entity_id from cb_vc_entity_mapping where cb_entity_id=(select entity_id from cb_entity where entity_name='work_GJL'))
This will give 'moid'. This moid should be used in the next query.
2) Please run following queries on VC DB.
We need outputs from 4 tables namely:
i) VPXV_HIST_STAT_YEARLY
ii) VPXV_HIST_STAT_MONTHLY
iii) VPXV_HIST_STAT_WEEKLY
iv) VPXV_HIST_STAT_DAILY
Substitute <table name> in the following query with each one of these and store the outputs of different queries independently and attach here.
select SAMPLE_TIME, SAMPLE_INTERVAL, STAT_VALUE from <table name> where ENTITY LIKE 'moid from above query' AND STAT_NAME = 'usage' and STAT_GROUP = 'disk' and STAT_ROLLUP_TYPE = 'average' order by SAMPLE_TIME
Thanks,
Mugdha -
How to send an external mail(PDF) through SCOT
Dear All,
We have a requirement to mail a customer invoice, converting the smartform into a PDF. All the necessary configuration in the NACE and SPRO transactions has been done. When we issue the output through an output type in transaction VF03, the output is issued successfully. SP02 also shows that printing completed successfully.
But when I check the display log against the output type and billing document issued, it says the processing log does not exist. We have configured transaction SCOT for sending external email, but it does not send the mail, shows no request waiting, and reports no errors, so we cannot tell whether the configuration is wrong (which does not seem to be the case) or whether there is a problem with the SCOT setup.
Could anyone please help me with the entire flow of configuring SCOT for sending external email, if possible with screenshots? Or, if possible, what could the problem be?
Awaiting your reply.
Thanks & Regards,
Lailu Philip.
* Internal table declarations
DATA: i_otf TYPE itcoo OCCURS 0 WITH HEADER LINE,
i_tline TYPE TABLE OF tline WITH HEADER LINE,
i_receivers TYPE TABLE OF somlreci1 WITH HEADER LINE,
i_record LIKE solisti1 OCCURS 0 WITH HEADER LINE,
* Objects to send mail
i_objpack LIKE sopcklsti1 OCCURS 0 WITH HEADER LINE,
i_objtxt LIKE solisti1 OCCURS 0 WITH HEADER LINE,
i_objbin LIKE solisti1 OCCURS 0 WITH HEADER LINE,
i_reclist LIKE somlreci1 OCCURS 0 WITH HEADER LINE,
* Work area declarations
wa_objhead TYPE soli_tab,
w_ctrlop TYPE ssfctrlop,
w_compop TYPE ssfcompop,
w_return TYPE ssfcrescl,
wa_doc_chng TYPE sodocchgi1,
w_data TYPE sodocchgi1,
wa_buffer TYPE string,"To convert from 132 to 255
* Variable declarations
v_form_name TYPE rs38l_fnam,
v_len_in LIKE sood-objlen,
v_len_out LIKE sood-objlen,
v_len_outn TYPE i,
v_lines_txt TYPE i,
v_lines_bin TYPE i.
call function 'SSF_FUNCTION_MODULE_NAME'
exporting
formname = 'ZZZ_TEST1'
importing
fm_name = v_form_name
exceptions
no_form = 1
no_function_module = 2
others = 3.
IF sy-subrc <> 0.
MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
ENDIF.
w_ctrlop-getotf = 'X'.
w_ctrlop-no_dialog = 'X'.
w_compop-tdnoprev = 'X'.
CALL FUNCTION v_form_name
EXPORTING
control_parameters = w_ctrlop
output_options = w_compop
user_settings = 'X'
IMPORTING
job_output_info = w_return
EXCEPTIONS
formatting_error = 1
internal_error = 2
send_error = 3
user_canceled = 4
OTHERS = 5.
IF sy-subrc <> 0.
MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
ENDIF.
i_otf[] = w_return-otfdata[].
call function 'CONVERT_OTF'
EXPORTING
format = 'PDF'
max_linewidth = 132
IMPORTING
bin_filesize = v_len_in
TABLES
otf = i_otf
lines = i_tline
EXCEPTIONS
err_max_linewidth = 1
err_format = 2
err_conv_not_possible = 3
others = 4.
* Error handling
if sy-subrc <> 0.
endif.
loop at i_tline.
translate i_tline using '~'.
concatenate wa_buffer i_tline into wa_buffer.
endloop.
translate wa_buffer using '~'.
do.
i_record = wa_buffer.
append i_record.
shift wa_buffer left by 255 places.
if wa_buffer is initial.
exit.
endif.
enddo.
* Attachment
refresh:
i_reclist,
i_objtxt,
i_objbin,
i_objpack.
clear wa_objhead.
i_objbin[] = i_record[].
* Create message body
* Title and description
i_objtxt = 'test with pdf-Attachment!'.
append i_objtxt.
describe table i_objtxt lines v_lines_txt.
read table i_objtxt index v_lines_txt.
wa_doc_chng-obj_name = 'smartform'.
wa_doc_chng-expiry_dat = sy-datum + 10.
wa_doc_chng-obj_descr = 'smartform'.
wa_doc_chng-sensitivty = 'F'.
wa_doc_chng-doc_size = v_lines_txt * 255.
* Main text
wa_doc_chng-doc_size = ( v_lines_txt - 1 ) * 255 + strlen( i_objtxt ).
clear i_objpack-transf_bin.
i_objpack-head_start = 1.
i_objpack-head_num = 0.
i_objpack-body_start = 1.
i_objpack-body_num = v_lines_txt.
i_objpack-doc_type = 'RAW'.
append i_objpack.
* Attachment (PDF)
i_objpack-transf_bin = 'X'.
i_objpack-head_start = 1.
i_objpack-head_num = 0.
i_objpack-body_start = 1.
* Determine the length of the attachment
describe table i_objbin lines v_lines_bin.
read table i_objbin index v_lines_bin.
i_objpack-doc_size = v_lines_bin * 255 .
i_objpack-body_num = v_lines_bin.
i_objpack-doc_type = 'PDF'.
i_objpack-obj_name = 'smart'.
i_objpack-obj_descr = 'test'.
append i_objpack.
clear i_reclist.
i_reclist-receiver = '[email protected]'.
i_reclist-rec_type = 'U'.
append i_reclist.
call function 'SO_NEW_DOCUMENT_ATT_SEND_API1'
EXPORTING
document_data = wa_doc_chng
put_in_outbox = 'X'
TABLES
packing_list = i_objpack
object_header = wa_objhead
CONTENTS_BIN = i_objbin
contents_txt = i_objtxt
receivers = i_reclist
EXCEPTIONS
too_many_receivers = 1
document_not_sent = 2
document_type_not_exist = 3
operation_no_authorization = 4
parameter_error = 5
x_error = 6
enqueue_error = 7
others = 8.
1. Use medium 7 for sending mails instead of 1 (printout).
2. Maintain the mail IDs for all vendors in the vendor master (check the ADR6 table).
3. Do the settings for sending mails in the SCOT and SOST tcodes with the help of a Basis person.
4. In NACE, also use medium 7 instead of 1 (print).
5. In the application document (PO, ME22N), also configure the output type with medium 7 for the partner (vendor) and the other communication method settings.
6. Check the print program; I think it will consider the medium and set the communication type automatically. Verify it by setting a breakpoint if it is not working.
In the smartform function module we get the spool ID of the smartform output.
Use that spool ID, convert it to PDF format, and send the mail using the function modules
CONVERT_ABAPSPOOLJOB_2_PDF
SO_NEW_DOCUMENT_ATT_SEND_API1
You have to write the related code for these function modules in the smartform driver program itself,
and ask your Basis people to configure the SCOT and SOST tcodes to send mails to the outside consignees' mail IDs.
Reward points if helpful. -
How to view pdf, xls files generated by using JasperExport
I am using WebLogic Server 8.1 to deploy my project. I have some PDF and XLS files generated by JasperExport. Code:
Map parameters = new HashMap();
parameters.put("donvi", DVi);
String ConnectionURL = "jdbc:oracle:thin:@localhost:1521:qltb";
Class.forName("oracle.jdbc.driver.OracleDriver");
Connection jdbcConnection = DriverManager.getConnection(ConnectionURL, "qltb", "qltb12345");
jasperReport = JasperCompileManager.compileReport("C:\\baocao.jrxml");
jasperPrint = JasperFillManager.fillReport(jasperReport, parameters, jdbcConnection);
//JasperViewer.viewReport(jasperPrint);
JasperExportManager.exportReportToPdfFile(jasperPrint, "report.pdf");
report.pdf is exported to the folder C:\bea\user_projects\domains\cems (cems is the name of the domain). But I don't know how to view this file or how to write a relative path to it. For example, in the test.jsp page I write a link to "Report 1", but it doesn't work.
Install IronTrack SQL as described in the following link
http://www.irongrid.com/documentation/irontracksql/install.html#install_oracle9iAS -
How to Reduce Clustering Factor on a Table?
I am seeing a very high clustering factor on an SDO geometry table in our 10g RAC DB on our Linux boxes. The slow performance is repeatable on other Linux as well as Solaris DBs for the same table. Inserts go in at a rate of 44 milliseconds per insert, and we only have about 27000 rows in the table. After viewing a VERY slow insert of about 600 records into this same table, I saw the clustering factor in OEM. The clustering factor is nearly identical to the number of rows in the table, indicating that the usability of the index is fairly low now. I have referenced Metalink Tech Note 223117.1 and, while it affirms what I've seen, I am still trying to determine how to reduce the clustering factor. The excerpt on how to do this is below:
"The only method to affect the clustering factor is to sort and then store the rows in the table in the same order as in they appear in the index. Exporting rows and putting them back in the same order that they appeared originally will have no affect. Remember that ordering the rows to suit one index may have detrimental effects on the choice of other indexes."
Sounds great, but how does one actually go about storing the rows in the table in the same order as they appear in the index?
We have tried placing our commits after the last insert as well as after every insert and the results are fairly neglible. We also have a column of type SDE.ST_GEOMETRY in the table and are wondering if this might also be an issue. Thanks in advance for any help.
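For what it's worth, the row-reordering the Metalink note describes is usually done by rebuilding the table sorted on the index key. A sketch with hypothetical names only:

```sql
-- Rebuild the table in index-key order (hypothetical names; test first).
CREATE TABLE geo_table_sorted AS
  SELECT * FROM geo_table ORDER BY geo_index_col;
-- Then drop/rename the old table and recreate its indexes, constraints,
-- and grants. As the note warns, this only helps SELECTs via that index.
```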
Matt Sauter
Joel is right that the clustering factor is going to have absolutely no effect on the speed of inserts. The clustering factor is merely one, purely statistical, factor the optimiser makes use of to determine how to perform a SELECT statement (i.e., do I bother to use this index or not for row retrieval). It's got nothing to do with the efficiency of inserts.
If I were you, I'd be looking at factors such as excessive disk I/O taking place for other reasons, inadequate buffer cache and/or enqueue and locking issues instead.
If you're committing after every insert, for example, then redo will have to be flushed (a commit is about the only foreground wait event -i.e., one that you get to experience in real time- that Oracle has, so a commit after every insert's really not a smart idea). If your redo logs are stored on, say, the worst-performing disk you could buy that's also doing duty as a fileserver's main hard disk, then LGWR will be twiddling its thumbs a lot! You say you've tested this, and that's fine... I'm just saying, it's one theoretical possibility in these sorts of situations. You still want to make sure you're not suffering any log writer-related waits, all the same.
Similarly, if you're performing huge reads on a (perhaps completely separate) table that is causing the buffer cache to be wiped every second or so, then getting access to your table so your inserts can take place could be problematic. Check if you've got any database writer waits, for example: they are usually a good sign of general I/O bottlenecks.
Finally, you're on a RAC... so if the blocks of the table you're writing to are in memory over on another instance, and they have to be shipped to your instance, you could have high enqueue waits whilst that shipment is taking place. Maybe your interconnect is not up to the job? Maybe it's faulty, even, with significant packet loss along the way? Even worse if someone's decided to switch off cache fusion transfer for the datafiles invoved (for then block shipment happens by writing them to disk in one instance and reading from disk in the other). RAC adds a whole new level of complexity to things, so good luck tracking that lot down!!
Also, maybe you're using Freelists and Freelist groups rather than ASSM, so perhaps you're fighting for access to the freelist with whatever else is happening on your database at the time...
You get the idea: this could be a result of activity taking place on the server for reasons completely unconnected with your insert. It could be a feature of Spatial (with which not many people will be familiar, so good luck if so!) It could be a result of the way your RAC is configured. It could be any number of things... but I'd be willing to bet quite a bit that it's got sod-all to do with the clustering factor!
You'll need to monitor the insert using a tool like Insider or Toad so you can see if waits and so on happen, more or less in real time -or start using the built-in tools like Statspack or AWR to analyze your workload after it's completed- to work out what your best fix is likely to be. -
We have just deployed a 4-node RAC cluster on 10GR2. We force a log switch every 5 minutes to ensure our Dataguard standby site is relatively up to date, we use the ARCH to ship logs. We are running to a very fast HP XP 12000 with massive amounts of write cache, so we never actually write straight to disk. However everytime we do a log switch and archive the log, we see a massive spike in the log file sync event. This is a real-time billing system so we monitor transaction response times in ms. Our response time for a transaction can go from 8ms to around 500ms.
I can't understand why this is happening: not only are our disks fast, but we are also using asynch I/O and ASM. Surely with asynch I/O you should never wait for a write to complete.
The 'log file sync' event happens when a client waits for LGWR to finish writing to the log file after the client issues a commit. The way to reduce the number of 'log file sync' events is to increase the speed of the LGWR process or not to commit that often.
You've described your disk system as very fast: what is the amount of data you write on every log switch? How does the performance of this write relate to your disk system tests? What block size did you use when testing the disk system? As far as I remember, LGWR uses the OS block size, not the DB block size, to write data to disk. Try to experiment on your test system: put your log files on a virtual disk created in RAM and run the test case. Do you still see the delays?
With such restrictions for the transaction time you may want to look at Oracle Times-Ten database (http://www.oracle.com/database/timesten.html)
Since you've mentioned the 10gR2 you could probably use the new feature - asynchronous commit - in this case your transaction will not wait for the LGWR process. Be aware that using the NOWAIT commit opens a small possibility of data loss - the doc describes it quite clear.
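For reference, the asynchronous commit is just an option on the COMMIT statement (10gR2 syntax):

```sql
-- Return without waiting for LGWR to flush this transaction's redo to disk:
-- lower commit latency, at the cost of a small window of possible data loss.
COMMIT WRITE BATCH NOWAIT;
```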
http://download-east.oracle.com/docs/cd/B19306_01/appdev.102/b14251/adfns_sqlproc.htm#CIHEDGBF
Mike -
Power PC G5 - scren goes grey and following message appears
This has happened 8 times today. Anything I can do. I receive the following message, and am told to power down holding the power button.
panic(cpu 0 caller 0x00C1917C): DART XBE DART Entry Exception: HyperTransport write logical page 0x00000
Latest stack backtrace for cpu 0:
Backtrace:
0x000954F8 0x00095A10 0x00026898 0x00C1917C 0x00B60D70 0x00B60DA8 0x00747048 0x002D1B8C
0x002D0A54 0x000A9714
Kernel loadable modules in backtrace (with dependencies):
com.apple.driver.AppleGPIO(1.1.9d0)@0xb5e000
dependency: com.apple.driver.IOPlatformFunction(1.8.0d12)@0x6c0000
com.apple.driver.MacIOGPIO(1.1.9d0)@0x745000
com.apple.driver.AppleMacRISC4PE(1.8.6f1)@0xc12000
dependency: com.apple.iokit.IOPCIFamily(1.7)@0x4ea000
dependency: com.apple.driver.IOPlatformFunction(1.8.0d12)@0x6c0000
Proceeding back via exception chain:
Exception state (sv=0x00ECA780)
PC=0x00000000; MSR=0x0000D030; DAR=0x00000000; DSISR=0x00000000; LR=0x00000000; R1=0x00000000; XCP=0x00000000 (Unknown)
Kernel version:
Darwin Kernel Version 8.11.0: Wed Oct 10 18:26:00 PDT 2007; root:xnu-792.24.17~1/RELEASE_PPC
Hi Carll14, and a warm welcome to the forums!
Resolving Kernel Panics...
http://www.thexlab.com/faqs/kernelpanics.html
See
Dr. Smoke's post here...
"This type of panic generally indicates that a device has attempted to perform a Direct Memory Access (DMA) read or write to an unprepared page. DMA involves accessing memory while bypassing the CPU, a function of all modern computer architectures. HyperTransport is a high-speed bus architecture between the computer's memory controller and its device I/O.
Specifically, a device has tried to read or write memory via DMA that has not been prepared by an IOMemoryDescriptor.
Read-related panics reveal themselves by beginning in the panic log:
Code:
panic(cpu 1 caller hexaddress1): DART entry exception: HyperTransport read logical page hexaddress2
Write-related panics begin:
Code:
panic(cpu 1 caller hexaddress1): DART entry exception: HyperTransport write logical page hexaddress2
where hexaddressn is a hexadecimal address, such as 0x00D4FAEC or 0x00C40.
The first Power Mac G5 desktops did not have an error facility for the DMA Address Relocation Table (DART). All later Power Mac G5 computers have such a facility. A kernel panic results when a PCI bus master device accesses memory at an address that is not mapped by the DART. Examples of PCI bus master devices include disk controllers, RAID controllers, and video cards employing DMA.
Generally this indicates an incompatible peripheral, often a PCI card, but incompatible AGP cards can also cause problems." -
How to restart VSS writers without rebooting
Hello fellow teckies.
I'm having this problem when backing up with Symantec Backup Exec 11d where it generates errors about not being able to backup VSS sections on the C drive.
I've already posted with Symantec.
https://www-secure.symantec.com/connect/forums/vss-c-drive-errors-when-backing-server
They're saying to re-register and restart the services. My question is, can the services be restarted without rebooting?
Unfortunately rebooting is not an option as this is a critical production server.
If anyone can answer, please let me know.
Thanks :)
What Santhosh mentioned is in the right direction. To restart a VSS writer, you need to restart the service or process that hosts the writer. Generally speaking, we will
take the following steps when encountering VSS writer related issues:
Retry the backup or restore operation that caused the error condition.
If this does not resolve the issue, restart the service or process that hosts the writer, and retry the operation.
If this does not resolve the issue, open Event Viewer as described in the "Open Event Viewer and view events related to VSS" section and look for events related to the
service or process that hosts the writer. If necessary, restart the service or process, and retry the operation.
If this does not resolve the issue, restart the computer, and retry the operation.
If restarting the computer does not resolve the issue, provide the Event Viewer information to the vendor whose application is indicated in the event text.
http://technet.microsoft.com/en-us/library/ee264212(WS.10).aspx
As for the impact of the restart, it generally will not cause a bad effect. However, it depends on the service being restarted. Generally speaking, a service should handle all related tasks, such as writing all in-memory data to disk and committing transactions, when a proper restart is performed. If there is any particular requirement or order to follow when restarting a service or application, you should
follow the recommended restart procedure.
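As a sketch, the usual sequence on the affected server looks like this. The service names vary by writer; `swprv` (the Microsoft Software Shadow Copy Provider) is only an example, so map your failing writer to its hosting service first:

```
rem List the VSS writers and their states to find the failing one
vssadmin list writers

rem Restart the hosting service without rebooting (example service only)
net stop swprv
net start swprv
```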
Meanwhile, it is reported that when performing Shadow Copy Components backups with the File Server Resource Manager (FSRM) installed you may encounter this issue. Please
refer to the following Symantec article:
http://www.symantec.com/business/support/index?page=content&id=TECH48419
NOTE: This response contains a reference to a third party World Wide Web site. Microsoft is providing this information as a convenience to you. Microsoft does not control
these sites and has not tested any software or information found on these sites; therefore, Microsoft cannot make any representations regarding the quality, safety, or suitability of any software or information found there. There are inherent dangers in the
use of any software found on the Internet, and Microsoft cautions you to make sure that you completely understand the risk before retrieving any software from the Internet.
Laura Zhang - MSFT -
Encore CS3 locks up while loading
Encore crashes while loading. It stops at loading Media Layer: DXCaptureSource.d and causes the application to hang.
szAppName : Adobe Encore.exe szAppVer : 3.0.1.8 szModName : hungapp
szModVer : 0.0.0.0 offset : 00000000
After installing DVD writer-related software, i.e. Nero and LightScribe, I noticed the problem. I have since removed all of these applications and their registry values, to no avail. Encore will only run if I start the system with basic drivers only (MSCONFIG - Diagnostic Startup). Can anyone help? Thank you.
Supplemental information: upon disabling the Windows Audio service (in MSCONFIG), Encore loads properly. The problem is that without Windows Audio you have no sound. Can Adobe look into this conflict, please?
-
Guys who passed OCP, please take a look
Hi all!
I have been working as a DBA for 1 year, and next summer I want to try to pass the OCA/OCP exams, so I decided to begin studying now. I found some books on Amazon; maybe someone has already bought them and can advise which would be more helpful:
1 http://www.amazon.com/Oracle-Database-Administration-Exam-Guide/dp/0071597093/ref=sr_1_11?ie=UTF8&qid=1347816347&sr=8-11&keywords=ocp+oracle
2 http://www.amazon.com/Oracle-Database-Fundamentals-Guide-ebook/dp/B001AEF8W0/ref=sr_1_10?ie=UTF8&qid=1347816385&sr=8-10&keywords=oca+oracle
3 http://www.amazon.com/Oracle-Database-Administration-Guide-1Z0-052/dp/0071591028/ref=sr_1_4?ie=UTF8&qid=1347816329&sr=8-4&keywords=oca+oracle
Or could I buy only this one, and would that be enough:
http://www.amazon.com/Oracle-Database-All--Guide-CD-ROM/dp/0071629181/ref=sr_1_1?ie=UTF8&qid=1347816310&sr=8-1&keywords=oca
Also, maybe there are better books?
Thanks!
I will acknowledge, up front, that what I am about to write relates to me specifically and to organizations I am familiar with in the US. I know that in other countries the experience can be different. That said ...
APC opened the door so I'll walk in too. Not only will passing an Oracle exam not get you a job when I interview ... it won't get you an interview unless two conditions exist. (1) The person paid for the class personally, not their company, and (2) it is an entry level position of which there are very few these days.
What will get someone an interview with me is experience ... in a world with lots of experienced candidates ... an exam is irrelevant. Quite simply because the most important thing to me is what people learned by doing stupid stuff, trashing servers, truncating tables, dropping file systems, experiencing a worthless backup. Everyone can buy a book or take a class and learn what to do. Oracle U is fantastic at teaching people what TO do. But what no one except hard experience can teach is what not to do. When to push yourself away from the keyboard and walk around the building.
So how to get experience if you don't have any? That is the sticky wicket I think one might say. What I watched many of my students from university do was get jobs only tangentially related to Oracle ... developer, data analyst, etc. and then volunteer to help at every opportunity. Some of my students donated time at the local office of the Red Cross which uses Oracle to get experience. I was even able to help some of my students with internships with companies such as Boeing and even the Seattle Police Department. Do that for a year, read the books and blogs written by the Tom Kytes, Richard Footes, etc. of the world and in a year you can be better than most people with 5 years of experience.
BTW: I am not suggesting you put 2 and 2 together from what I've written and trash a database at a company like Boeing. But sit around and keep your eyes open and you will watch others do it. Put together your own Oracle install at home and trash that. I used to tell my students that if in the course of a quarter ... they had not destroyed and rebuilt their learning databases at least twice ... they weren't trying hard enough. -
Hi Everyone,
I need a query to get the highest-CPU and highest-physical-I/O queries executed over the last 10 days, with the program name and query type. By query type I mean what kind of query it is, e.g. stored procedure, SQL batch, and so on. I tried some queries but had no luck.
Please, someone help me get this data.
Thanks
Try this
-- Top Cached SPs By Total Physical Reads, Physical reads relate to disk I/O pressure - SP Physical Reads - 2008
SELECT TOP(25) p.name AS [SP Name],qs.total_physical_reads AS [TotalPhysicalReads],
qs.total_physical_reads/qs.execution_count AS [AvgPhysicalReads], qs.execution_count,
qs.total_logical_reads,qs.total_elapsed_time, qs.total_elapsed_time/qs.execution_count
AS [avg_elapsed_time], qs.cached_time
FROM sys.procedures AS p WITH (NOLOCK)
INNER JOIN sys.dm_exec_procedure_stats AS qs WITH (NOLOCK)
ON p.[object_id] = qs.[object_id]
WHERE qs.database_id = DB_ID()
AND qs.total_physical_reads > 0
ORDER BY qs.total_physical_reads DESC, qs.total_logical_reads DESC OPTION (RECOMPILE);
-- Logical writes relate to both memory and disk I/O pressure
SELECT TOP(25) p.name AS [SP Name], qs.total_logical_writes AS [TotalLogicalWrites],
qs.total_logical_writes/qs.execution_count AS [AvgLogicalWrites], qs.execution_count,
ISNULL(qs.execution_count/DATEDIFF(Minute, qs.cached_time, GETDATE()), 0) AS [Calls/Minute],
qs.total_elapsed_time, qs.total_elapsed_time/qs.execution_count AS [avg_elapsed_time],
qs.cached_time
FROM sys.procedures AS p WITH (NOLOCK)
INNER JOIN sys.dm_exec_procedure_stats AS qs WITH (NOLOCK)
ON p.[object_id] = qs.[object_id]
WHERE qs.database_id = DB_ID()
AND qs.total_logical_writes > 0
ORDER BY qs.total_logical_writes DESC OPTION (RECOMPILE);
Thanks
Please mark this reply as the answer or vote as helpful, as appropriate, to make it useful for other readers -
Is it possible to get the style, font, and related info of paragraph text in an InDesign file, and write all of that back into the same InDesign file on the left side in a small font?
as
Lets this is a text in in design file :
style : abc we are going to check the condition Agence Wallonne pour la Promotion d'une Agricultur we are going to check the condition Agence Wallonne pour la font 12 d'une Agricultu we are going to check the condition Agence Wallonne pour la Promotion d'une Agricultu
style : xyz we are going to check the condition Agence Wallonne pour la Promotion d'une Agricultur we are going to check the condition Agence Wallonne pour la font 10 d'une Agricultu we are going to check the condition Agence Wallonne pour la Promotion d'une Agricultu
Hi Poojith,
I am not sure whether this solves your requirement, but just in case it might be helpful:
1. We can mix HTML and HTMLB components in the JSP page; however, we can access only the HTMLB components in the controller. The following link covers the customizations offered by the HTMLB framework:
[http://www.sapdesignguild.org/resources/htmlb_guidance/]
2. Another option would be to use AbstractPortalComponents or a simple web app if that's feasible. (where custom UI themes, css and layout are more in control of the developers.)
Thanks
Deepak -
Just to give an example, Microsoft IIS webserver provides the
performance counters, which one can read from registry using
APIs provided by Microsoft from a C/C++ program and get all the
performance-related data ... or are there some similar interfaces
provided by the iPlanet webserver?
The spell-checked version...
I really appreciate the replies. I have looked into RMI and think it might fit the bill, BUT let me clarify a bit and see if there are any other ideas floating around out there.
A user, using a web interface on machine A, will click the "I want my file" button. This flags the DB to create a file for that user. I will have multiple daemons running on other machines B, C, D, whose sole job is to check the DB, compile the file (this is the part with huge overhead, which is why I want it distributed), and then write the file. The trick is that the file needs to be written to machine A. So I figure, using RMI, I can write a simple "write this file" object that accepts a byte stream or byte array and a path to write to. Does this sound like a good methodology?
I've never done anything like this so I am really shooting in the dark.
Thanks again for the posts.
Paul
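A minimal sketch of that "write this file" remote object. All names are hypothetical, and the exported stub is called in-process here just to show the mechanics; in the real setup the daemons on B/C/D would obtain the stub from an RMI registry running on machine A:

```java
import java.io.IOException;
import java.nio.file.*;
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.server.UnicastRemoteObject;

// Remote interface: accepts the compiled file as a byte array plus a target path.
interface FileSink extends Remote {
    void write(String path, byte[] data) throws RemoteException;
}

// Implementation lives on machine A and writes into A's filesystem.
class FileSinkImpl implements FileSink {
    public void write(String path, byte[] data) throws RemoteException {
        try {
            Files.write(Paths.get(path), data);
        } catch (IOException e) {
            throw new RemoteException("write failed", e);
        }
    }
}

public class RmiFileDemo {
    public static void main(String[] args) throws Exception {
        FileSinkImpl impl = new FileSinkImpl();
        // Export the servant; the returned stub is what a remote daemon would use
        // (normally obtained via Naming.lookup against a registry on machine A).
        FileSink stub = (FileSink) UnicastRemoteObject.exportObject(impl, 0);
        String target = System.getProperty("java.io.tmpdir") + "/rmi-demo.txt";
        stub.write(target, "hello from daemon B".getBytes());
        System.out.println(new String(Files.readAllBytes(Paths.get(target))));
        UnicastRemoteObject.unexportObject(impl, true); // let the JVM exit cleanly
    }
}
```

Since byte[] is serializable, it crosses the wire fine for modest files; for very large files you would want to chunk the writes rather than ship one array.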