Select statement taking much time.......
Hi,
IF NOT i601[] IS INITIAL.
  SELECT vbelv
         posnv
         vbeln
         posnn
         vbtyp_v
         matnr
    FROM vbfa INTO TABLE ivbfa
    FOR ALL ENTRIES IN i601
    WHERE vbeln   = i601-mblnr AND
          posnn   = i601-zeile2 AND
          vbtyp_v = 'J'.

  SELECT vbeln
         matnr
         werks
         lgort
         vgbel
         vgpos
         mwsbp
    FROM vbrp INTO TABLE ivbrp
    FOR ALL ENTRIES IN ivbfa
    WHERE vgbel = ivbfa-vbelv AND
          vgpos = ivbfa-posnv AND
          vgtyp = 'J' AND
          werks IN werks.

  CLEAR i601.
  FREE i601.
ENDIF.
The program gets stuck at the second SELECT statement above, and I was not able to figure out the reason. There are no loops or anything, but still it does not move past the second SELECT statement; it takes quite a long time. Can anyone here throw some light on this? By the way, none of the fields in the WHERE clause of the second SELECT are primary keys.
Thanks,
K.Kiran.
Hi,
In the second table you are trying to extract the records without passing the primary key values.
Anyhow, you already have the values VBELN and POSNR in internal table i601, so pass those values to VBRP as well.
IF NOT i601[] IS INITIAL.
  SELECT vbelv
         posnv
         vbeln
         posnn
         vbtyp_v
         matnr
    FROM vbfa INTO TABLE ivbfa
    FOR ALL ENTRIES IN i601
    WHERE vbeln   = i601-mblnr AND
          posnn   = i601-zeile2 AND
          vbtyp_v = 'J'.

  IF NOT ivbfa[] IS INITIAL.
    SELECT vbeln
           matnr
           werks
           lgort
           vgbel
           vgpos
           mwsbp
      FROM vbrp INTO TABLE ivbrp
      FOR ALL ENTRIES IN ivbfa
      WHERE vbeln = ivbfa-vbeln AND
            posnr = ivbfa-posnv AND
            vgbel = ivbfa-vbelv AND
            vgpos = ivbfa-posnv AND
            vgtyp = 'J' AND
            werks IN werks.
  ENDIF.

  CLEAR i601.
  FREE i601.
ENDIF.
Now check your program.
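The IF NOT ivbfa[] IS INITIAL guard before the second SELECT matters for more than speed: with FOR ALL ENTRIES, an empty driver table does not return nothing — the comparison is dropped and every row is selected. A rough Python model of that pitfall (the table contents and key names here are made up for illustration, not SAP data):

```python
# Rough model of ABAP's FOR ALL ENTRIES: the WHERE condition is expanded
# per driver row; with an empty driver table the condition vanishes
# and ALL rows come back.
def for_all_entries(db_rows, driver_keys):
    if not driver_keys:          # what ABAP does without a guard
        return list(db_rows)     # effectively a full table scan
    wanted = set(driver_keys)
    return [r for r in db_rows if (r["vgbel"], r["vgpos"]) in wanted]

def guarded_for_all_entries(db_rows, driver_keys):
    if not driver_keys:          # the IF NOT ivbfa[] IS INITIAL guard
        return []
    return for_all_entries(db_rows, driver_keys)

vbrp = [{"vgbel": "90001", "vgpos": "10"},
        {"vgbel": "90002", "vgpos": "10"},
        {"vgbel": "90003", "vgpos": "20"}]

print(len(for_all_entries(vbrp, [])))                 # 3 -> every row
print(len(guarded_for_all_entries(vbrp, [])))         # 0 -> nothing selected
print(len(for_all_entries(vbrp, [("90001", "10")])))  # 1
```

The guard turns "select everything" into "select nothing", which is what the logic actually intends when VBFA returned no rows.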
Similar Messages
-
Dear all ,
I am fetching data from pool table a006. The select query is mentioned below.
select * from a005 into table i_a005 for all entries in it_table
where kappl = 'V'
and kschl IN s_kschl
and vkorg in s_vkorg
and vtweg in s_vtgew
and matnr in s_matnr
and knumh = it_table-knumh .
Here every field is a primary key field except KNUMH, which is compared against table it_table. Because of this field the query is taking too much time, as KNUMH is not a primary key. And A005 is a pool table, so I cannot create an index for it. If there is an alternative solution, please let me know.
Thank You ,
And in the technical settings of the table it is mentioned as fully buffered, and the size category is 0. But there are around 9,000,000 records. Is that the issue, or what? Can somebody give a genuine reason, or suggest an improvement to my select query?
Edited by: TVC6784 on Jun 30, 2011 3:31 PM
TVC6784 wrote:
Hi Yuri ,
>
> Thanks for your reply....I will check as per your comment...
> But if I remove the field KNUMH from the selection condition and also from FOR ALL ENTRIES in it_itab, the data is fetched very fast, as KNUMH is not a primary key.
> . the example is below
>
> select * from a005 into table i_a005
> where kappl = 'V'
> and kschl IN s_kschl
> and vkorg in s_vkorg
> and vtweg in s_vtgew
> and matnr in s_matnr.
>
> Can you comment anything about it ?
>
> And can you please say how can i check its size as you mention that is 2-3 Mb More ?
>
> Edited by: TVC6784 on Jun 30, 2011 7:37 PM
I cannot see the trace and other information about the table, so I cannot judge why the select without KNUMH is faster.
Basically, if the table is buffered and its contents are in the SAP application server memory, the access should be really fast. It does not really matter whether it is with KNUMH or without.
I would really like to see at least ST05 trace of your report that is doing this select. This would clarify many things.
You can check the size by multiplying the entries in A005 table by 138. This is (in my test system) the ABAP width of the structure.
If you have 9,000,000 records in A005, then it would take 1.24 GB in the buffer (which is a clear sign to unbuffer). -
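Yuri's estimate is simple arithmetic: number of rows times the ABAP structure width (138 bytes per row is the width he measured in his test system; your system may differ). A quick check of the 1.24 GB figure:

```python
# Buffer footprint estimate for a fully buffered table: rows * row width.
# 138 bytes/row is the structure width from Yuri's test system (an assumption).
ROW_WIDTH_BYTES = 138
rows = 9_000_000

total_bytes = rows * ROW_WIDTH_BYTES
total_gb = total_bytes / 1_000_000_000  # decimal GB, as used in the post

print(f"{total_gb:.2f} GB")  # 1.24 GB -> far too large for full buffering
```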
Same select statement taking more time
Hello all,
I have two select statements. only the name of table from where it is fetching records are different.
1) select belnr posnr etenr into corresponding fields of table it_cdtemp2
from j_3avasso for all entries in it_cdtemp1
where belnr = it_cdtemp1-vbeln and posnr = it_cdtemp1-posnr .
it_cdtemp1 has 100 entries and j_3avasso has 20000 entries
2) select belnr posnr etenr into corresponding fields of table it_cdtemp2
from j_3avap for all entries in it_cdtemp1
where belnr = it_cdtemp1-vbeln and posnr = it_cdtemp1-posnr .
it_cdtemp1 has 100 entries and j_3avap has 2000 entries
Statement 1 executes in less than a minute, whereas statement 2 is taking around 15 to 20 minutes.
Could anyone suggest why, and if so, how to minimize the run time?
Regards
Bala
Hi,
You can sort the internal table before using FOR ALL ENTRIES BY VBELN and POSNR.
This will save a lot of processing time.
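Sorting helps, and deduplicating the driver table helps even more, since FOR ALL ENTRIES sends one condition per driver row to the database. A sketch of the idea in Python (the document numbers are made up; in ABAP this is SORT plus DELETE ADJACENT DUPLICATES):

```python
# Shrink a FOR ALL ENTRIES driver table: sort by the lookup key and
# drop adjacent duplicates, so fewer conditions reach the database.
def prepare_driver(rows):
    rows = sorted(rows, key=lambda r: (r["vbeln"], r["posnr"]))
    out = []
    for r in rows:
        key = (r["vbeln"], r["posnr"])
        if not out or (out[-1]["vbeln"], out[-1]["posnr"]) != key:
            out.append(r)  # keep only the first row of each key group
    return out

it_cdtemp1 = [{"vbeln": "3000", "posnr": "20"},
              {"vbeln": "1000", "posnr": "10"},
              {"vbeln": "1000", "posnr": "10"},   # duplicate key
              {"vbeln": "2000", "posnr": "10"}]

driver = prepare_driver(it_cdtemp1)
print(len(driver))  # 3 distinct keys instead of 4 conditions
```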
You can also try combining both selects into one join statement, taking both tables with the FOR ALL ENTRIES addition.
Regards,
Subhashini
Edited by: Subhashini K on Oct 8, 2009 2:58 PM -
Select Statement taking more time.How to improve the query performance.
SELECT DISTINCT ORDERKEY, SUM(IMPRESSIONCNT) AS ActualImpressions ,SUM(DiscountedSales)AS ActualRevenue ,SUM(AgencyCommAmt) as AgencyCommAmt
,SUM(SalesHouseCommAMT) as SalesHouseCommAMT
--INTO Anticiapted_ADXActualsMeasures
FROM AdRevenueFact_ADX ADx WITH(NOLOCK)
Where FiscalMonthkey >=201301 and Exists (Select 1 from Anticipated_cdr_AX_OrderItem OI Where Adx.Orderkey=Oi.Orderkey)
GROUP BY ORDERKEY
Clustered indexes on orderkey,fiscalmonthkey and orderkey in AdRevenueFact_ADX(contain more than 170 million rows)
thanks
As mentioned by Kalman, if your clustered index starts with Orderkey, then this query will require a full table scan. If it is an option to change the clustered index in such a way that FiscalMonthkey is the leading column, then only the data of the last two years has to be queried.
In addition, you should have a look at the indexes of table Anticipated_cdr_AX_OrderItem. Ideally, there is a nonclustered index on Orderkey.
To get better advice, please post the query plan and list all available indexes of these tables.
Finally, an off topic remark: it is a good practice to keep consistent spelling of object names, and to keep the same spelling as their declaration. Your query would cause serious problems if the database is ever run with case sensitive collation.
Gert-Jan -
LOV is slow after selecting a value its taking much time to default
Hi,
I have a dependent LOV. The master LOV executes fine and populates the field quickly. But the child LOV is very slow; after selecting a value, it takes much time to default.
Can anyone please help me: is there any way to default the value quickly after selecting it?
Thanks,
Mahesh
Hi Gyan,
Same issues in TST and PROD instances.
With my search criteria, even if I get one record, after selecting that value it still takes much time to default that value into the field.
Please advise. Thanks for your quick response.
Thanks,
Mahesh -
Query taking much time.
Hi All,
I have one query which is taking much time in the dev environment, where the data size is very small, and I am planning to run this query on the production database, where the database size is huge. Please let me know how I can optimize this query.
select count(*) from (
select /*+ full(tls) full(tlo) parallel(tls,2) parallel(tls, 2) */
tls.siebel_ba, tls.msisdn
from
TDB_LIBREP_SIEBEL tls, TDB_LIBREP_ONDB tlo
where
tls.siebel_ba = tlo.siebel_ba (+) and
tls.msisdn = tlo.msisdn (+) and
tlo.siebel_ba is null and
tlo.msisdn is null
union
select /*+ full(tls) full(tlo) parallel(tls,2) parallel(tls, 2) */
tlo.siebel_ba, tlo.msisdn
from
TDB_LIBREP_SIEBEL tls, TDB_LIBREP_ONDB tlo
where
tls.siebel_ba (+) = tlo.siebel_ba and
tls.msisdn (+) = tlo.msisdn and
tls.siebel_ba is null and
tls.msisdn is null
);
The explain plan of the above query is:
| Id | Operation | Name | Rows | Bytes | Cost | TQ |IN-OUT| PQ Distrib |
| 0 | SELECT STATEMENT | | 1 | | 14 | | | |
| 1 | SORT AGGREGATE | | 1 | | | | | |
| 2 | SORT AGGREGATE | | 1 | | | 41,04 | P->S | QC (RAND) |
| 3 | VIEW | | 164 | | 14 | 41,04 | PCWP | |
| 4 | SORT UNIQUE | | 164 | 14104 | 14 | 41,04 | PCWP | |
| 5 | UNION-ALL | | | | | 41,03 | P->P | HASH |
|* 6 | FILTER | | | | | 41,03 | PCWC | |
|* 7 | HASH JOIN OUTER | | | | | 41,03 | PCWP | |
| 8 | TABLE ACCESS FULL| TDB_LIBREP_SIEBEL | 82 | 3526 | 1 | 41,03 | PCWP | |
| 9 | TABLE ACCESS FULL| TDB_LIBREP_ONDB | 82 | 3526 | 2 | 41,00 | S->P | BROADCAST |
|* 10 | FILTER | | | | | 41,03 | PCWC | |
|* 11 | HASH JOIN OUTER | | | | | 41,03 | PCWP | |
| 12 | TABLE ACCESS FULL| TDB_LIBREP_ONDB | 82 | 3526 | 2 | 41,01 | S->P | HASH |
| 13 | TABLE ACCESS FULL| TDB_LIBREP_SIEBEL | 82 | 3526 | 1 | 41,02 | P->P | HASH |
Predicate Information (identified by operation id):
6 - filter("TLO"."SIEBEL_BA" IS NULL AND "TLO"."MSISDN" IS NULL)
7 - access("TLS"."SIEBEL_BA"="TLO"."SIEBEL_BA"(+) AND "TLS"."MSISDN"="TLO"."MSISDN"(+))
10 - filter("TLS"."SIEBEL_BA" IS NULL AND "TLS"."MSISDN" IS NULL)
11 - access("TLS"."SIEBEL_BA"(+)="TLO"."SIEBEL_BA" AND "TLS"."MSISDN"(+)="TLO"."MSISDN")
user3479748 wrote:
I dunno, it looks like you are getting all the things that are null with an outer join, so won't that decide to full scan anyway? Plus the union means it will do it twice and then a distinct to get rid of dups - see how it does a UNION ALL and then a SORT UNIQUE. Somehow I have the feeling there might be a trickier way to do what you want, so maybe you should state exactly what you want in English. -
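Stated in English, the two outer-join/IS NULL branches glued with UNION compute a symmetric difference on the key pair (siebel_ba, msisdn): keys present in one table but not the other, in either direction. Seeing that intent plainly can suggest simpler SQL (e.g. two anti-joins with UNION ALL, since the branches cannot overlap). A small Python model of what the query counts (the sample keys are invented):

```python
# The UNION of the two anti-join branches equals the symmetric
# difference of the key sets (siebel_ba, msisdn) of the two tables.
siebel = {("BA1", "111"), ("BA2", "222"), ("BA3", "333")}
ondb   = {("BA2", "222"), ("BA3", "333"), ("BA4", "444")}

only_in_siebel = siebel - ondb          # first branch of the UNION
only_in_ondb   = ondb - siebel          # second branch
result = only_in_siebel | only_in_ondb  # what the whole query counts

print(sorted(result))
print(result == siebel ^ ondb)  # True: exactly the symmetric difference
```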
Database table AUFM hit is taking much time even though a secondary index was created
Hi Friends,
There is a report for goods movements related to service orders + account indicator.
We have two testing Systems(EBQ for developer and PEQ from client side).
EBQ system contains replica of PEQ every month.
This report is not taking much time in EBQ, but it is taking much time in PEQ. For the selection criteria I have given, both systems have the same data (and give the same output).
The report has the following fields on the selection screen:
A_MJAHR Material Doc. Year (Mandatory)
S_BLDAT Document Date(Optional)
S_BUDAT Posting Date(Optional)
S_LGORT Storage Location(Optional)
S_MATNR Material(Optional)
S_MBLNR Material Document (Optional)
S_WERKS Plant(Optional)
The client is not agreeing to make Material Document mandatory.
The main (first) table hit is on the AUFM table. As there are non-key fields in the WHERE condition as well, we have created a secondary index for the AUFM table on the following fields:
BLDAT
BUDAT
MATNR
WERKS
LGORT
Even so, in the PEQ system the report is taking a very long time; sometimes we do not even get the ALV output.
What can be done to get the report to execute fast?
<removed by moderator>
The part of report Soure code is as below:
<long code part removed by moderator>
Thanks and Regards,
Rama chary.P
Moderator message: please stay within the 2500 character limit to preserve formatting, only post relevant portions of the code, also please read the following sticky thread before posting.
Please Read before Posting in the Performance and Tuning Forum
locked by: Thomas Zloch on Sep 15, 2010 11:40 AM -
ODS Activation is taking much time...
Hi All,
Sometimes ODS activation takes much time. Generally it takes 30 minutes, and sometimes it takes 6 hours.
When activation is taking that long and I check SM50, I can see that this piece of SQL is taking much time:
SELECT COUNT(*), "RECORDMODE"
  FROM "/BIC/B0000814000"
 WHERE "REQUEST" = :A0 AND "DATAPAKID" = :A1
 GROUP BY "RECORDMODE"
Could you please let me know what the possibilities are to solve this issue?
Thanks
Hello,
you have 2 options:
1) as already mentioned, clean up some old PSA data or change log data from this PSA table, or
2) create an additional index for RECORDMODE on this table via transaction SE11 -> Indexes.
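For option 2, an index that starts with the WHERE columns and ends with RECORDMODE lets the database answer the count entirely from the index, without touching the table. Sketched here with SQLite standing in for the real database (the table and index names are hypothetical; the real table is the /BIC/B0000814000 change log):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE changelog (request TEXT, datapakid TEXT, recordmode TEXT)")
# Index ordered WHERE columns first, then the GROUP BY column:
con.execute("CREATE INDEX idx_cl ON changelog (request, datapakid, recordmode)")
con.executemany("INSERT INTO changelog VALUES (?,?,?)",
                [("REQ1", "000001", ""), ("REQ1", "000001", "N"),
                 ("REQ1", "000002", ""), ("REQ2", "000001", "X")])

query = ("SELECT COUNT(*), recordmode FROM changelog "
         "WHERE request = ? AND datapakid = ? GROUP BY recordmode")
plan = " ".join(row[3] for row in
                con.execute("EXPLAIN QUERY PLAN " + query, ("REQ1", "000001")))
print(plan)  # the plan mentions a covering index: no table access needed
print(con.execute(query, ("REQ1", "000001")).fetchall())
```

The same principle applies on the SAP database: with (REQUEST, DATAPAKID, RECORDMODE) indexed in that order, the activation count becomes an index-only range scan.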
Regards, Patrick Rieken. -
SELECT statement takes long time
Hi All,
In the following code, if T_QMIH-EQUNR contains blank or space values, the SELECT statement takes a long time to access the data from the OBJK table. If T_QMIH-EQUNR contains values other than blank, performance is good and it fetches the data very fast.
We already have an index for EQUNR in the OBJK table.
Only for blank entries does it take much time. Can anybody tell why it behaves this way for blank entries?
IF NOT t_qmih[] IS INITIAL.
  SORT t_qmih BY equnr.
  REFRESH t_objk.
  SELECT equnr obknr
    FROM objk INTO TABLE t_objk
    FOR ALL ENTRIES IN t_qmih
    WHERE taser = 'SER01' AND
          equnr = t_qmih-equnr.
ENDIF.
Thanks
Ajay
Hi
You can use the field QMIH-QMNUM with OBJK-IHNUM.
In the QMIH table, EQUNR is not a primary key, so it will have multiple entries.
So, to improve the performance, use a dummy internal table for QMIH, sort it on EQUNR,
delete adjacent duplicates from d_qmih, and use that table in FOR ALL ENTRIES.
This will improve the performance.
Also use the fields in the sequence of the index, and include the primary keys in the SELECT:
IF NOT t_qmih[] IS INITIAL.
  SORT t_qmih BY equnr.
  REFRESH t_objk.
  SELECT equnr obknr
    FROM objk INTO TABLE t_objk
    FOR ALL ENTRIES IN t_qmih
    WHERE ihnum = t_qmih-qmnum AND
          taser = 'SER01' AND
          equnr = t_qmih-equnr.
ENDIF.
try this and let me know
regards
Shiva -
Query taking much time, Oracle 9i
Hi,
**How can we tune the SQL query in Oracle 9i?**
The select query takes more than 1 hour and 30 minutes to return the result.
Because of this,
we have created a materialized view on the select query, and we have also submitted a job in dba_jobs to refresh the materialized view daily.
When we try to retrieve the data from the materialized view, we get the result very quickly.
But the job assigned in dba_jobs takes as much time to complete as the query used to take.
We feel that since the job takes much time in the test database, it may cause load if we move the same scripts to the production environment.
Please suggest how to resolve the issue and also how to tune the SQL.
With Regards,
Srinivas
Edited by: Srinivas.. on Dec 17, 2009 6:29 AM
Hi Srinivas;
Please follow this search and see if it is helpful.
Regards
Helios -
Discoverer report is taking much time to open
Hi
All the Discoverer reports are taking much time to open; even a query in an LOV takes 20-25 minutes. We have restarted the services, but with no result.
Please suggest what can be done; my application is on 12.0.6.
Regards
This topic was discussed many times in the forum before; please see the old threads for details and for the docs you need to refer to -- https://forums.oracle.com/forums/search.jspa?threadID=&q=Discoverer+AND+Slow&objID=c3&dateRange=all&userID=&numResults=15&rankBy=10001
Thanks,
Hussein -
Hi, I have updated my Mac OS X 10.5 to 10.5.8 successfully, but while installing it takes much time and the status bar does not show any progress.
If I remember correctly, one of the updates could hang after doing the update. This was fixed by a restart, but installing a combo update over the top of others rarely does any harm, and it may be the best thing to do.
-
Adding column is taking much time. How to avoid?
ALTER TABLE CONTACT_DETAIL
ADD (ISIMDSCONTACT_F NUMBER(1) DEFAULT 0 NOT NULL
,ISREACHCONTACT_F NUMBER(1) DEFAULT 0 NOT NULL
);
Is there any way to speed up the execution of this statement?
It has been more than 24 hours since the above script started running.
I do not know why it is taking so much time. The size of the table is 30 MB.
To add a column, the row directory of every record must be rewritten.
Obviously this will take time and produce redo.
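The end state is the same regardless of engine: after ALTER TABLE ... ADD with a default and NOT NULL, every existing row reads the default value. Oracle (in the versions discussed here) gets there by physically updating each row, which is where the hours go. The end state can be sketched with SQLite, which records the default in metadata instead of rewriting rows (column types adjusted to SQLite):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE contact_detail (contact_id INTEGER PRIMARY KEY)")
con.executemany("INSERT INTO contact_detail (contact_id) VALUES (?)",
                [(1,), (2,), (3,)])

# Same shape as the Oracle statement; SQLite treats this as a metadata
# change, while Oracle 9i/10g must touch every row of the table.
con.execute("ALTER TABLE contact_detail "
            "ADD COLUMN isimdscontact_f INTEGER NOT NULL DEFAULT 0")
con.execute("ALTER TABLE contact_detail "
            "ADD COLUMN isreachcontact_f INTEGER NOT NULL DEFAULT 0")

rows = con.execute("SELECT contact_id, isimdscontact_f, isreachcontact_f "
                   "FROM contact_detail ORDER BY contact_id").fetchall()
print(rows)  # every pre-existing row carries the default 0
```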
Whenever something is slow, the first question you need to answer is:
'What is it waiting for?' You can investigate this via various v$ views.
Also, after more than 200 'I cannot be bothered to do any research on my own' questions, you should know not to post here without a four-digit version number and a platform,
as volunteers aren't mind readers.
If you want to continue to withhold information, please consider NOT posting here.
Sybrand Bakker
Senior Oracle DBA
Experts: those who did read the documentation and can be bothered to investigate their own problems. -
Taking much time for loading data
Dear All
While loading data in the <b>PRD system</b> (for master data and transaction data), it is taking much time. For example, for 2LIS_05_QOITM I am loading only delta data into the <b>PSA</b> from R/3. Sometimes (yesterday) it takes 2 minutes, and sometimes (today) it has been running 5 hours and has not yet completed (it is yellow). Yesterday we went to SM58 on the R/3 side and executed that LUW for some other DataSources. We could do the same now, but we do not want to work like that, because we are expecting a permanent solution. Could you please advise me? I am getting the below message in the status tab:
Errors while sending packages from OLTP to BI
Diagnosis
No IDocs could be sent to BI using RFC.
System Response
There are IDocs in the source system ALE outbox that did not arrive in the ALE inbox of BI.
Further analysis:
Check the TRFC log.
You can access this log using the wizard or the menu path "Environment -> Transact. RFC -> In source system".
Error handling:
If the TRFC is incorrect, check whether the source system is fully connected to BI. In particular, check the authorizations of the background user in the source system.
I am loading data through Process chain and user is <b>BWREMOTE (authorized user).</b>
Please help me.
Thanks a lot in advance
Raja
Dear Karthik,
No, I could not resolve it till now.
But everything is fine.
Now the status is yellow only (209 from 209). Now what do I do?
I am getting the below message <b>in the status tab</b>:
Missing data packages for PSA Table
Diagnosis
Data packets are missing from PSA Table . BI processing does not return any errors. The data transport from the source system to BI was probably incorrect.
Procedure
Check the tRFC overview in the source system.
You access this log using the wizard or following the menu path "Environment -> Transact. RFC -> Source System".
Error handling:
If the tRFC is incorrect, resolve the errors listed there.
Check that the source system is connected properly to BI. In particular, check the remote user authorizations in BI.
<b>In the detail tab</b>, I am getting the below message:
Info IDoc 2 : sent, not arrived ; IDoc ready for dispatch (ALE service)
Thanks in advance
Raja -
While saving the AI file it is taking much time
I am using Windows 7 and AI CC; when I am working on files, saving the AI file takes much time. I had 4 GB of RAM, and now I have a better system with 6 GB of RAM. Still, AI takes much time to save a 300 MB file.
Thank you, Jdanek.
1. I am saving the file on my local HDD only.
2. The scratch disk is the F drive [100 GB free space]; no data is stored there.
3. I even increased the virtual memory too.
4. Now I have switched off "Save with PDF compatibility" too. No luck!