Duplicate records at a particular time
Hi, I am getting duplicate records at a particular point in time. Could you please help me with this?
The drive on which I installed Informatica ran out of disk space, and I found this in the error log:

SF_34125 Error in writing storage file [C:\Informatica\9.0.1\server\infa_shared\Storage\pmservice_Domain_ssintr01_INT_SSINTR01_1314615470_0.dat]. System returns error code [errno = 28], error message [No space left on device].

I then shut down the Integration Service and freed up some space on the disk. The log file showed:

LM_36047 Waiting for all running workflows to complete.
SF_34014 Service [INT_SSINTR01] on node [node01_ssintr01] shut down.

When I tried to start the Integration Service again, I got the following error:

Could not execute action... The Service INT_SSINTR01 could not be enabled due to the following error: [DOM_10079] Unable to start service [INT_SSINTR01] on any node specified for the service

After this I could not find any entry in the Integration Service log, so I went to the domain log for more details and found the following:

DOM_10126 Request to disable [SERVICE] [INT_SSINTR01] in [COMPLETE] mode.
DOM_10130 Stop service process for [SERVICE] [INT_SSINTR01] on node [node01_ssintr01].
LIC_10040 Service [INT_SSINTR01] is stopping on node [node01_ssintr01].
SPC_10015 Request to stop process for service [INT_SSINTR01] with mode [COMPLETE] on node [node01_ssintr01].
DOM_10127 Request to disable service [INT_SSINTR01] completed.
DOM_10126 Request to disable [SERVICE] [Repo_SSINTR01] in [ABORT] mode.
DOM_10130 Stop service process for [SERVICE] [Repo_SSINTR01] on node [node01_ssintr01].
LIC_10042 Repository instance [Repo_SSINTR01] is stopping on node [node01_ssintr01].
SPC_10015 Request to stop process for service [Repo_SSINTR01] with mode [ABORT] on node [node01_ssintr01].
DOM_10127 Request to disable service [Repo_SSINTR01] completed.
DOM_10115 Request to enable [service] [Repo_SSINTR01].
DOM_10117 Starting service process for service [Repo_SSINTR01] on node [node01_ssintr01].
SPC_10014 Request to start process for service [Repo_SSINTR01] on node [node01_ssintr01].
SPC_10018 Request to start process for service [Repo_SSINTR01] was successful.
SPC_10051 Service [Repo_SSINTR01] started on port [6,019] successfully.
DOM_10118 Service process started for service [Repo_SSINTR01] on node [node01_ssintr01].
DOM_10121 Selecting a primary service process for service [Repo_SSINTR01].
DOM_10120 Service process on node [node01_ssintr01] has been set as the primary node of service [Repo_SSINTR01].
DOM_10122 Request to enable service [Repo_SSINTR01] completed.
LIC_10041 Repository instance [Repo_SSINTR01] has started on node [node01_ssintr01].
DOM_10115 Request to enable [service] [INT_SSINTR01].
DOM_10117 Starting service process for service [INT_SSINTR01] on node [node01_ssintr01].
SPC_10014 Request to start process for service [INT_SSINTR01] on node [node01_ssintr01].
DOM_10055 Unable to start service process [INT_SSINTR01] on node [node01_ssintr01].
DOM_10079 Unable to start service [INT_SSINTR01] on any node specified for the service.

The same sequence (both services disabled, the repository restarted successfully, the Integration Service failing to start) then repeats verbatim in the domain log.

Then I tried shutting down the domain and restarting the Informatica service again. I got the following error when the Integration Service was initialized:

DOM_10115 Request to enable [service] [INT_SSINTR01].
DOM_10117 Starting service process for service [INT_SSINTR01] on node [node01_ssintr01].
SPC_10014 Request to start process for service [INT_SSINTR01] on node [node01_ssintr01].
SPC_10009 Service process [INT_SSINTR01] output [Informatica(r) Integration Service, version [9.0.1], build [184.0604], Windows 32-bit].
SPC_10009 Service process [INT_SSINTR01] output [Service [INT_SSINTR01] on node [node01_ssintr01] starting up.].
SPC_10009 Service process [INT_SSINTR01] output [Logging to the Windows Application Event Log with source as [PmServer].].
SPC_10009 Service process [INT_SSINTR01] output [Please check the log to make sure the service initialized successfully.].
SPC_10008 Service Process [INT_SSINTR01] output error [ERROR: Unexpected condition at file:[..\utils\pmmetrics.cpp] line:[2118]. Application terminating. Contact Informatica Technical Support for assistance.].
SPC_10012 Process for service [INT_SSINTR01] terminated unexpectedly.
DOM_10055 Unable to start service process [INT_SSINTR01] on node [node01_ssintr01].
DOM_10079 Unable to start service [INT_SSINTR01] on any node specified for the service.

I tried creating a new Integration Service and associating it with the same repository; I got the same error. I then tried creating a new repository and a new Integration Service, and still got the same error. What might be the workaround to start the Integration Service?
Similar Messages
-
Remove duplicate records in Live Office, caused by CR Groups
hello all
I have a CR with groups. All works well until I use the report in Live Office, where it duplicates the group data for each of the detail records.
I have removed the details from the CR report, leaving only the group data, but it still happens.
Does anyone have a workaround?
thanks
g

Hi,
First, select the report name from the left panel and check whether the option appears or not. Or try right-clicking on any report cell, then go to Live Office and Object Properties.
Second, are you getting duplicate records in this particular report or in all reports? And how many Highlighting Experts are you using in this report?
Thanks,
Amit -
Hi everyone,
I'm having a little difficulty resolving a problem with a repeating field causing duplication of data in a report I'm working on, and was hoping someone on here could suggest something to help!
My report is designed to detail library issues during a particular period, categorised by the language of the item issued. My problem is that on the SQL database that our library management system uses, it is possible for an item to have more than one language listed against it (some books will be in more than one language). When I list the loan records excluding the language data field, I get a list of distinct loan records. Bringing the language data into the report causes the loan record to repeat for each language associated with it, so if a book is in both English and French, the loan record will appear like this:
LOAN RECORD NO. LANGUAGE CODE
123456 ENG
123456 FRE
So, although the loan only occurred once I have two instances of it in my report.
I am only interested in the language that appears first and I can exclude duplicated records from the report page. I can also count only the distinct records to get an accurate overall total. My problem is that when I group the loan records by language code (I really need to do this as there are millions of loan records held in the database) the distinct count stops being a solution, as when placed at this group level it only excludes duplicates in the respective group level it's placed in. So my report would display something like this:
ENG 1
FRE 1
A distinct count of the whole report would give the correct total of 1, but a cumulative total of the figures calculated at the language code group level would total 2, and be incorrect. I've encountered similar results when using Running Totals evaluating on a formula that excludes repeated loan record no.s from the count, but again when I group on the language code this goes out of the window.
I need to find a way of grouping the loan records by language with a total count of loan records alongside each grouping that accurately reflects how many loans of that language took place.
Is this possible using a calculation formula when there are repeating fields, or do I need to find a way of merging the repeating language fields into one field so that the report would appear like:
LOAN RECORD LANGUAGE CODE
123456 ENG, FRE
Any suggestions would be greatly appreciated, as aside from this repeating language data there are quite a few other repeating database fields on the system that it would be nice to report on!
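One way to get the merged layout shown above is to collapse the repeating language rows on the database side before the report sees them. This is only a sketch: the table and column names are assumptions, and LISTAGG requires Oracle 11g or later.

```sql
-- Collapse the repeating language rows so each loan appears once,
-- with its languages merged into one comma-separated field.
-- loan_languages, loan_record_no and language_code are hypothetical names.
SELECT loan_record_no,
       LISTAGG(language_code, ', ')
         WITHIN GROUP (ORDER BY language_code) AS language_codes
FROM   loan_languages
GROUP  BY loan_record_no;
```

With the languages pre-merged, the report receives one distinct row per loan, so ordinary counts stay accurate.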
Thanks!

If you create a group by loan, then create a group by language within it, and place the values in the group (loan ID in the loan group header), you should only see the loan ID once.
Place the language in the language group and you should only see that one time too; a group header returns the first value of a unique ID.
Then, to calculate while avoiding the duplicates, use manual running totals.
Create a set for each summary you want, and make sure each set has a different variable name.
MANUAL RUNNING TOTALS
RESET
The reset formula is placed in a group header (or report header) to reset the summary to zero for each unique record it groups by:
whileprintingrecords;
numbervar X := 0;
CALCULATION
The calculation formula is placed adjacent to the field or formula being calculated. (If there are duplicate values, create a group on the field being calculated; if there are no duplicate records, the detail section is used.)
whileprintingrecords;
numbervar X := X + 1; // or X + {your field or formula}
DISPLAY
The display formula shows the sum of what is being calculated. This is placed in a group, page, or report footer (generally the group footer matching the group header where the reset is placed):
whileprintingrecords;
numbervar X;
X -
DELIVERY OF DUPLICATE RECORDS?
Hi friends,
Under the General Info tab of a DataSource in BI there is a "delivery of duplicate records" setting with the options undefined, allow, and none.
In my view two options would be enough: allowed or not allowed.
What is the purpose of undefined and none?
Regards,
Suneel

This indicator gives information on how the DataSource behaves within a request with regard to duplicate records:
' ' The status of the DataSource is unknown.
'0' The DataSource does not deliver any duplicate records within a request, with reference to its key.
'1' The DataSource can deliver duplicate records within a request, with reference to its key. However, no duplicate records are delivered in a data package.
This indicator is particularly important for delta-capable attribute tables and text tables.
For the settings '0' and '1' you also have to define a key for the DataSource. This can be done either in the DDIC, using the maintenance for the corresponding field property of the extract structure fields, or (alternatively or additionally) in the metadata of the DataSource. A field in the DataSource also has the additional attribute 'DataSource Key Field', which transfers or corrects the DDIC property where necessary.
Use
DataSources can, for a key, transfer time-independent master data or time-independent texts from multiple data requests in one request to BW. If data records within a request are transferred to BW more than once, in some circumstances this can be application-relevant and is therefore not considered an error. BW provides functionality for handling duplicate data records that can deal with such ambiguity.
Dependencies
The DataSource transfers the information concerning whether it is transferring potential duplicate data records. This information is given to the scheduler when creating new InfoPackages. In the scheduler you can determine how the system responds to duplicate data records. -
Query on deletion of duplicate data based on insertion time
Hi All,
I have loaded nearly 3 crore records into a table using SQL*Loader
(there are four data files).
Nearly 10 lakh records from one particular data file are duplicated in the database because it was loaded a second time by mistake.
Table has no constraints defined on it.
I know the approximate insertion time of the records.
Is there any way to find the rows based on insertion time and delete those duplicate rows?
Thanks,
Kishore

What version of Oracle? If you are on 10g, you created the table with ROWDEPENDENCIES (so ORA_ROWSCN is tracked per row), and the loads were relatively recent, you may still be able to convert the ORA_ROWSCN pseudocolumn to a time and identify the rows that way. If you're on an earlier version, you didn't specify ROWDEPENDENCIES when you created the table, or you did the load some time ago, you're probably out of luck.
If you're in ARCHIVELOG mode and you were doing logged loads and you still have the archived logs from the time period in question, you may be able to use LogMiner to identify the rows in question. I'd wager, though, that it would be faster just to query the table looking for duplicate rows and delete those rows.
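Both approaches described above can be sketched in SQL. The table and column names below are hypothetical, and SCN_TO_TIMESTAMP gives only an approximate, block-level time unless the table tracks SCNs per row:

```sql
-- Inspect approximate insertion times via the ORA_ROWSCN pseudocolumn (10g+).
SELECT t.*, SCN_TO_TIMESTAMP(ORA_ROWSCN) AS approx_insert_time
FROM   my_table t;

-- Or simply delete the duplicates directly, keeping one copy of each row.
-- List every column that defines a "duplicate" in the GROUP BY.
DELETE FROM my_table t
WHERE  t.rowid NOT IN (SELECT MIN(t2.rowid)
                       FROM   my_table t2
                       GROUP  BY t2.col1, t2.col2, t2.col3);
```

The delete keeps the row with the lowest rowid in each duplicate group and removes the rest.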
Justin -
How to suppress duplicate records in rtf templates
Hi All,
I am facing issue with payment reason comments in check template.
We are displaying payment reason comments. The issue is that while making a batch payment we get multiple payment reason comments from multiple invoices with the same name, and it doesn't look good. You can see the payment reason comments under the tail number text field in the template.
Could you provide any XML syntax to suppress duplicate records, so that only distinct payment reason comments are shown?
Attached screen shot, template and xml file for your reference.
Thanks,
Sagar.

I have CRXI, so the instructions are for this release.
You can create a formula; I called it Cust_Matches:
if {table.CustID} = previous({table.CustID}) then 'true' else 'false'
(the compared field is assumed from the rest of this reply; substitute the field you are checking for duplicates).
In your GH2 section, right-click the field, select Format Field, and select the Common tab (far left at the top).
Click the x+2 formula button to the right of Suppress and type in:
{@Cust_Matches} = 'true'
Now every time {@Cust_Matches} is true, the CustID will be suppressed.
Do the same with the other fields you wish to hide, i.e. Address, City, etc. -
USE of PREVIOUS command to eliminate duplicate records in counter formula
I'm trying to create a counter formula to count the number of documents paid over 30 days. To do this I subtract the InvDate from the PayDate and then create a counter based on this value: if {days to pay} is greater than 30 then 1 else 0.
Then I sum the {days to pay} counter for each group. The groups are company, month, and supplier.
Because invoices can have multiple payments and payments can have multiple invoices, there is no way around having duplicate records for the field.
So my counter is distorted by the duplicate records, and my percentage of payments over 30 days formula will not be accurate due to these duplicates.
I've tried a Distinct Count based on this formula, and it works except that it counts 0.00 as a distinct record, so my total is off by 1 for summaries with a record where {days to pay} is less than or equal to 30.
If I subtract 1 from the formula then it will be inaccurate for summaries with no records over 30 days.
So I've come to this:
if previous(...) does not equal the current value
then
  (if {days to pay} is greater than 30
   then 1
   else 0.00)
else 0.00
But it doesn't work. I've sorted the detail section by
Does anyone have any knowledge or success using the PREVIOUS command in a report?
Edited by: Fred Ebbett on Feb 11, 2010 5:41 PM

So, you have to include all data and not just use the selection criterion 'PayDate-InvDate>30'?
You will need to create a running total on the RPDOC ID, one for each section you need to show a count for, evaluating for your >30 day formula.
I don't understand why you're telling the formula to return 0.00 in your if statement.
In order to get percentages you'll need to use the distinct count (possibly running totals again but this time no formula). Then in each section you'd need a formula that divides the two running totals.
I may not have my head around the concept, since you stated "invoices can have multiple payments and payments can have multiple invoices". So invoice A can have payments 1, 2 and 3, and payment 4 can be associated with invoices B and C? Ugh. Still, though, you're evaluating every row of data. If your focus is the invoices that took longer than 30 days to be paid, I'd group on the invoice number, put the "if PayDate-InvDate>30 then 1 else 0" formula in the detail, do a sum on it in the group footer, and base my running total on the sum being >0 to do a distinct count of invoices.
Hope this points you in the right direction.
Eric -
How to do validation on multi record block at run time...
Dear Friends,
I have to do run-time validation on a multi-record block.
If the user tries to enter and save two or more records with the same data in the same multi-record block at run time, it should not allow it and should display a message like: "Error: records have the same (duplicate) values."
I have already done validation against the data coming from the database table using a cursor, but I am confused about how to do it at run time (on the one screen only).
Please give details about how to check for duplicate records in a form block before saving to the table.
Any details would be greatly helpful.
thanks,

Hi Pankaj,
Thanks for your reply.
I have already done validation for the data coming from the table,
but I need to do it on the form only, at run time.
exa...
In one multi record block
record no : column 1 : column 2 : column 3
1 abc 123 hi
2 abc 123 hi
so it should check on the form itself; here I am not getting data from the table, I am just
navigating from the first record to the second using the down arrow.
So there may be 2 possibilities:
1) when the user navigates (using Tab) to the third record, it should report the two duplicate records (maybe in WHEN-VALIDATE-RECORD or WHEN-VALIDATE-ITEM);
2) or when the user tries to save, it should report the two duplicate records.
So everything should take place on the form screen only.
Waiting for your reply,
thanks -
HOW TO create a temp table or a record group at run time
I have a tabular form, and I don't want to allow the user to enter duplicate
records while in insert mode, when the inserted records are new and do not yet exist in the database.
So I want to know how to create a temp table or a record group at run time to hold the inserted values and check whether they already exist in previous rows or not.
Please help!

As was stated above, there are better ways to do it. But if you still wish to create a temporary block to hold the inserted records, then you can do this:
Create a non-database block with items that have the same data types as the database table. When the user creates a new record, insert the record in the non-database block. Then, before the commit, compare the records in the non-database block with those in the database block, one at a time, item by item. If the record is not a duplicate, copy the record to the database block. Commit the records, and delete the records in the non-database block. -
How can I produce a stimulus at a particular time (e.g. 30 ms in the window)
Hi,
I would like to record EMG from two muscles (mastoid and upper sternum) before and after a stimulus (a tone pip produced by a headphone). For this reason, I have created a program with two timed loops (please see the attachment). In the second timed loop, I want to produce a tone 30 ms into the ongoing voluntary EMG (duration 6-10 ms), so the second loop will be used only to generate the tone pip at that particular time. In the first timed loop, I want to see the ongoing voluntary EMG activity and monitor the effect of the tone after the stimulus.
I can hear the tone in my program and can change its frequency, but I could not get the tone at 30 ms (at a particular time in the window). I set a 30 ms offset in the second timed loop but still could not get the stimulus at 30 ms.
Is there any way to produce the tone pip stimulus 30 ms into the ongoing EMG?
please see the attached program.
Thanks in advance for your kind help
regards
Milly
Attachments:
new_2.vi 359 KB

Hi Milly,
If I understand your question correctly, you would like to synchronize your tone generation with your analog input acquisition at a specific amount of time. Your program will run both loops "simultaneously"; however, I do not see any condition in your DAQ acquisition stating that 30 ms has passed to allow the tone generation to happen. Keep in mind that the trigger you are going to send, if you are not using LabVIEW Real-Time, will have millisecond resolution. I would suggest you add conditions that time your loops, specifically a VI called "Elapsed Time". This VI lets you find out how much time has elapsed and output a boolean that the other loop can use as a trigger.
I hope this helps,
Regards,
Nadim
Applications Engineering
National Instruments -
Write Optimized DSO Duplicate records
Hi,
We are facing a problem while doing a delta load to a write-optimized DataStore object.
It gives the error "Duplicate data record detected (DS <ODS name>, data package: 000001, data record: 294)".
But it cannot have a duplicate record, since the data comes from a DSO,
and we have also checked the particular record in the PSA and couldn't find a duplicate there.
There is no very complex routine either.
Has anyone ever faced this issue and found the solution? Please let me know if so.
Thanks
VJ

Ravi,
We have checked that there are no duplicate records in the PSA.
Also, the source ODS has two keys and the target ODS has three keys.
The records it mentioned have record mode "N" (New).
Seems to be an issue with the write-optimized DSO.
Regards
VJ -
Purchase Order Import inserts duplicate records in po_line_locations
Hi,
I'm running the standard Purchase Order Import program to import a few POs. We have only one shipment for each item, so there is only one record for each line in po_line_locations. But after running the import, it inserts a duplicate record with the same quantity into po_line_locations. Basically it is inserting the same item twice in the po_line_locations_all table, and the quantity is getting doubled at the line level. Searched Metalink, but no hits for this so far.
This is in R12 (12.0.6).
Did anyone encounter this problem earlier? Any hints or comments would help.
Thanks in advance.
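To confirm the doubled shipment rows before fixing anything, a query along these lines can be run. The column choices are illustrative; adjust them to whatever should be unique per shipment:

```sql
-- List PO lines that have more than one identical shipment row
-- in po_line_locations_all.
SELECT po_line_id, quantity, COUNT(*) AS copies
FROM   po_line_locations_all
GROUP  BY po_line_id, quantity
HAVING COUNT(*) > 1;
```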
Edited by: user2343071 on Sep 2, 2009 3:54 PM

Hi,
Please debug the particular program with the help of an ABAPer. That may resolve your issue. Thanking you -
Duplicate record with same primary key in Fact table
Hi all,
Can the fact table have duplicate records with the same primary key? When I checked a cube I could see records with the same primary key combination, but with different key figure values. My cube has 6 dimensions (including Time, Unit, and DP) and 2 key figures, so 6 fields combine to form the composite primary key of the fact table. When I checked the records in SE16 I could see duplicate records with the same primary key. There is no parallel loading happening for the cube.
BW system version is 3.1.
Database: Oracle 10.2.
I am not sure how this is possible.
Regards,
PM

Hi Krish,
I checked the data packet dimension also. Both records have the same dimension ID (141). Except for the key figure value there is no other change in the fact table record. I know this is against the basic DBMS primary key rule, but I have records like this in the cube.
Can this situation arise when the same record is in different data packets of the same request?
Thx,
PM
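The duplicate-key situation described in this thread can be checked directly in SQL. This is only a sketch: a 3.x fact table is named /BIC/F<cube>, and the KEY_ column names below are illustrative placeholders for the six dimension-key columns:

```sql
-- Count fact rows per full dimension-key combination; any count > 1
-- is a duplicate of the composite "primary key".
SELECT key_time, key_unit, key_dp, key_dim1, key_dim2, key_dim3,
       COUNT(*) AS copies
FROM   "/BIC/FMYCUBE"
GROUP  BY key_time, key_unit, key_dp, key_dim1, key_dim2, key_dim3
HAVING COUNT(*) > 1;
```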
-
Duplicate Records & CFTRANSACTION
Maybe I'm missing the obvious on this, as I've never had a
problem with this before, but recently I developed a custom tag
that logs and blocks the IP addresses of all these recent DECLARE
SQL injection attempts.
The issue, however, is that my blocked IP address table seems to be
getting duplicates here and there. The "datetime" field in the
database shows the duplicates are all added to the database in the
exact same second. What gives?
Shouldn't CFTRANSACTION be preventing such a thing, even if
multiple injection attempts come at the same time?

I've always coded my applications so that my primary key is my
database's autonumber field, and instead ensured my coding is solid
enough to prevent duplicate records from appearing. Oddly enough
it has worked flawlessly until now.
Wouldn't ColdFusion throw errors if I made the "ip_address"
field my primary key and my code allowed a duplicate record to
be entered? Am I interpreting the CFTRANSACTION code to do
something it doesn't do?
Also, the duplicates aren't causing problems, so a DISTINCT
select isn't necessary. The IP address is blocked whether one
record or fifty exist in my blocked_ip_addresses table. My goal is
just not to waste database space.
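Independent of CFTRANSACTION, a database-side unique index is the usual way to make such duplicates impossible; a sketch using the table and column names mentioned in the thread:

```sql
-- Guarantee at the database level that an IP can only be blocked once.
CREATE UNIQUE INDEX ux_blocked_ip
    ON blocked_ip_addresses (ip_address);
```

With the index in place, a duplicate insert raises a database error that the CFML code can catch and ignore, so duplicate rows can never accumulate.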
Any further help you can provide is MUCH appreciated!
Thanks! -
Duplicate records in PO scheduled line for framework order (going for dump)
Hi all,
I am creating a framework purchase order with item category B. I am assigning an external number range for the PO. The PO is created with reference to an expense PR. I just found that duplicate records appear in the schedule lines for the same item.
Then, after I save the PO, it goes to a dump and sends a message to my SAP inbox that there are duplicate records.
Later I cannot find those POs in the system. Please let me know where I am making a mistake. Why do duplicate records appear in the PO schedule lines?
Thanks a lot
pabi

Hi,
Please debug the particular program with the help of an ABAPer. That may resolve your issue. Thanking you