Fitting more than 256 characters into a table of contents in InDesign CS2
I need help finding a way to fit more than 256 characters into a table of contents entry in InDesign CS2, or somewhere I can get a plug-in.
I don't know how many cases you have with that problem, that is, how many headings run longer than 256 characters, but if there are only a few, the quickest fix is to search for each one and copy and paste. If there are many, then split the headings across two identical styles, say heading011 and heading012, and include both in the table of contents, the first without a page number and the second with one. It's a kludge, but if it works for you, it's good enough.
I don't know of any plug-in.
Regards, Mateo Sánchez
Similar Messages
-
About indexes / tables of contents
Well, second question of the day.
I'm creating a table of contents, what is called a TOC. I've noticed that when I have several entries of the same level, they appear one after another separated by a semicolon instead of a paragraph break. Let me explain. Imagine an index with the following entries:
Level 01 entry....... page no.
Level 02 entry..... page no.
Level 02 entry..... page no.
Level 01 entry....... page no.
Level 01 entry....... page no.
That's how I would like it to come out, but what I get is:
Level 01 entry....... page no.
Level 02 entry..... page no.; Level 02 entry..... page no.
Level 01 entry....... page no.; Level 01 entry....... page no.
That is, when there are two or more entries of the same level, they come out one after another separated by a semicolon. Am I doing something wrong? Is there a fix? In a pinch the workaround is easy: search for ";" and replace it with a paragraph break, but I would like it to come out right automatically.
Thanks in advance,
Mateo
Mateo, when I discovered yesterday through your other message that you meant InDesign, I wrote you a brief reply, but apparently I never posted it. Basically, I told you that the formatting of table of contents entries is done with styles, in the Table of Contents styles dialog. Your unwanted semicolon is controlled by the option called "Run-in" in English, which in Spanish may be called something like "de corrido". In any case, the topic is well explained in InDesign's Help menu, at least in version 2, which I still use.
Since my copy of InDesign is in English, the names I have given may not match exactly, but I am sure you will manage. -
How to validate the characters of a work area
Hi, I need an ABAP function or code snippet to validate that words contain only Western-alphabet characters. What I have is a work area with characters (which could be, for example, Chinese or Russian letters), and I just need to know whether they belong to the Western alphabet.
Well, I hope you can help me!
Regards
These links may help you:
Comparing Strings (SAP Library - ABAP Programming (BC-ABA))
Todo SAP: Operadores para manejo de String
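As a rough illustration of the check being asked for (outside ABAP), a Western-alphabet-only validation can be sketched with a regular expression. The allowed character set below is an assumption; widen it (for example with accented letters) as your data requires:

```python
import re

# ASSUMPTION: "Western alphabet" here means unaccented basic Latin
# letters, digits, whitespace, and common punctuation; widen the
# character class as needed for your data.
LATIN_RE = re.compile(r"^[A-Za-z0-9\s.,;:!?'\"()\-]*$")

def is_western(text: str) -> bool:
    """True when every character of `text` is in the allowed set."""
    return LATIN_RE.fullmatch(text) is not None

print(is_western("Hello, world!"))  # True
print(is_western("Привет"))         # False: Cyrillic
print(is_western("漢字"))            # False: CJK
```

In ABAP the same idea can be expressed with `CO` ("contains only") against a character set, as discussed in the linked "Comparing Strings" page.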
Regards. -
Hello, I would like to ask several quick questions that are blocking progress on my work:
Situation 1:
I am producing a manual and I built each chapter as a separate file in InDesign CC 2014, with the .indd extension. How do I combine several .indd files into one final document?
Situation 2:
I have to build a table of contents (TOC). At what point should I do it? Once I have all the files combined into a single one? Or can I build a TOC in each of the chapters?
Thank you very much; I look forward to your replies.
Regards,
Jackie
Here is a link to the proper forum for InDesign: InDesign
-
How can I create an index of figures in Pages?
Friends, I am writing a monograph in Pages and I would like to be able to create an index of the figures and tables. If anyone knows how, it would help me a lot.
Thanks
Type the captions as a paragraph under the images and give them a Caption* paragraph style.
Then use that Caption paragraph style when you index the document with a table of contents.
Peter
* Just any sensible style name, not literally "Caption". -
Sorting paragraphs by style
Dear all,
I would like to know whether there is a script or plug-in for InDesign to reorder the paragraphs of a given text by style. When generating a table of contents, it does not come out sorted; it appears in order of appearance within the text. I need a quick way to reorder everything into a specific order according to paragraph style.
Thanks!
I would advise you to use English on this forum... With Google Translate it is relatively easy...
-
How can I create a fourth iCloud account?
I have acquired an iPhone 4, and when I go to sign in to iCloud it tells me I have exceeded the number of free accounts on this iPhone. How can I fix that? A prompt reply, please.
-
Description of field from data element in table control
Hello people! I need help!
I'm using a table control in my module pool program. I created the table through the wizard, taking the fields from an internal table. The internal table was defined like this:
Table Control: posting items
DATA: t_postitems TYPE stucture_fd OCCURS 0 WITH HEADER LINE.
Where stucture_fd is a structure with the fields.
The problem I have is that the column titles of the fields do not take their descriptions from the data elements.
How do I create this table control from the internal table so that it takes the descriptions from the data elements? This is very important, because this development will be used in different languages.
Thanks in advance for any answer.
Thank you so much.
Best regards, Esther.
Thanks, it's easier for me in Spanish.
tables: tab_prb.
types: begin of ti.
include structure tab_prb.
types: end of ti.
data: ti_bkpf type table of ti,
      wa_bkpf type ti.
Explanation:
tab_prb is a transparent table containing the structure struct_fd, which holds the fields plus a flag for row selection (flag).
I defined this transparent table so that the fields take their descriptions from the data elements.
If I create the table control directly from the table tab_prb, it does not let me select the flag field as the selection column, whose function is to mark a single line.
If I use the definition above in my program, everything goes fine when I create the table control, up to ti_bkpf, which would be my internal table; but when it asks for the work area and I enter wa_bkpf, it tells me that it is not defined in my program or is not a structure.
I hope that clarifies my reply a bit.
Thank you very much!
Esther. -
Hi.
I've got a problem...
I'm uploading texts to SAP from .txt files, but what I need is to put string lines of up to 1024 characters into a table with a 132-character field without cutting the words.
Has anybody had this problem?
For example, this is a line of text without any "enter":
La atención médica con regularidad está fuera de la programación económica familiar y en el momento que se presenta una alteración en la salud normalmente refleja un desequilibrio económico. MédicaLife es un seguro de gastos médicos diseñado para complementar los objetivos de protección, que le brinda la oportunidad de contar con un respaldo para enfrentar los gastos que deban realizarse por accidentes o enfermedades.Relación en donde se especifica el monto máximo que pagará MetLife por cada consulta médica o procedimiento quirúrgico ya sea ENFERMEDAD o ACCIDENTE. Tabulador diferente según el plan contratado y lugar donde éste se realice. Cantidad a cargo del asegurado, que se debe pagar en cada evento por Enfermedad o Accidente. Una vez rebasada esta cantidad, comienza la obligación de MetLife. Porcentaje a cargo del asegurado, que se aplica al monto total de gastos cubiertos por Enfermedad en cada reclamación inicial o complementaria, una vez descontado el deducible. En caso de Accidente no aplica el costos
And I need to do something like this:
table x
field 1
La atención médica con regularidad está
fuera de la programación económica familiar
y en el momento que se presenta una
alteración en la salud normalmente refleja
un desequilibrio económico. MédicaLife es un
seguro de gastos médicos diseñado para complementar
los objetivos de protección, que le brinda la
oportunidad de contar con un respaldo para
enfrentar los gastos que deban realizarse por
accidentes o enfermedades.Relación en donde se
especifica el monto máximo que pagará MetLife por cada
consulta médica o procedimiento quirúrgico ya sea
ENFERMEDAD o ACCIDENTE. Tabulador diferente según el
plan contratado y lugar donde éste se realice.
Cantidad a cargo del asegurado, que se debe pagar en
cada evento por Enfermedad o Accidente. Una vez
rebasada esta cantidad, comienza la obligación de
MetLife. Porcentaje a cargo del asegurado, que se
aplica al monto total de gastos cubiertos por
Enfermedad en cada reclamación inicial o
complementaria, una vez descontado el deducible.
En caso de Accidente no aplica el costos.
Please help me! :o(
Hi Adrian,
you can use this sample code.
REPORT zkun_file7 .
TABLES : zkunal1.
DATA : BEGIN OF tablex OCCURS 0,
field1(132) TYPE c,
END OF tablex.
DATA : BEGIN OF gv_itab OCCURS 0,
str(3000) TYPE c,
END OF gv_itab.
*data : begin of gv_itab occurs 0,
*  fname like zkunal1-fname,
*  lname like zkunal1-lname,
*  place like zkunal1-place,
*  end of gv_itab.
DATA : gv_file TYPE string.
*SELECTION SCREEN *
SELECTION-SCREEN BEGIN OF BLOCK b1 WITH FRAME TITLE text-001.
PARAMETERS: p_file LIKE ibipparms-path OBLIGATORY. " For file selection
SELECTION-SCREEN END OF BLOCK b1.
*AT SELECTION SCREEN *
AT SELECTION-SCREEN ON VALUE-REQUEST FOR p_file.
CALL FUNCTION 'F4_FILENAME'
EXPORTING
program_name = syst-cprog
dynpro_number = syst-dynnr
FIELD_NAME = ' '
IMPORTING
file_name = p_file.
*START OF SELECTION * *
START-OF-SELECTION.
* P_FILE is not compatible with the FM GUI_UPLOAD, so pass it to
* GV_FILE.
gv_file = p_file.
CALL FUNCTION 'GUI_UPLOAD'
EXPORTING
filename = gv_file
filetype = 'ASC'
has_field_separator = '#'
*   header_length           = 0
*   read_by_line            = 'X'
*   dat_mode                = ' '
*   codepage                = ' '
*   ignore_cerr             = abap_true
*   replacement             = '#'
*   check_bom               = ' '
* IMPORTING
*   filelength              =
*   header                  =
TABLES
data_tab = gv_itab
EXCEPTIONS
file_open_error = 1
file_read_error = 2
no_batch = 3
gui_refuse_filetransfer = 4
invalid_type = 5
no_authority = 6
unknown_error = 7
bad_data_format = 8
header_not_allowed = 9
separator_not_allowed = 10
header_too_long = 11
unknown_dp_error = 12
access_denied = 13
dp_out_of_memory = 14
disk_full = 15
dp_timeout = 16
OTHERS = 17.
IF sy-subrc <> 0.
MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
ENDIF.
DATA: x TYPE i,
      len TYPE i.
* take successive 132-character slices of each uploaded line
* (note: this still splits mid-word)
LOOP AT gv_itab.
  len = strlen( gv_itab-str ).
  x = 0.
  WHILE x < len.
    tablex-field1 = gv_itab-str+x.
    x = x + 132.
    APPEND tablex.
  ENDWHILE.
ENDLOOP.
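The slicing above is fixed-width and can cut words in half, while the question asks for a split that keeps words whole. As a rough, illustrative sketch outside ABAP (the function name is made up), Python's standard textwrap module does a word-safe wrap at the 132-character field width:

```python
import textwrap

def split_field(text: str, width: int = 132) -> list[str]:
    """Split text into lines of at most `width` characters without
    cutting words (a single word longer than `width` is still broken,
    since it cannot fit on one line)."""
    return textwrap.wrap(text, width=width, break_on_hyphens=False)

for line in split_field("La atencion medica con regularidad esta fuera "
                        "de la programacion economica familiar", width=40):
    print(line)
```

The same greedy algorithm (take words until the next word would overflow, then start a new line) is straightforward to port to an ABAP loop over `SPLIT ... AT space`.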
Hope this helps you.
Regards,
Kunal. -
(OT) Delete a table and create a table with data
Hello forum,
I have the following question. I need, with a single button or piece of code, something that lets me delete a table and then recreate it with several records.
Does anyone know how to do it?
Thanks in advance,
Anuack.com.
MySQL - PHP
The idea is to create a button that deletes a table, then recreates it with several records.
Any suggestions?
"Daniel Naranjo" <[email protected]> wrote in message news:[email protected]...
Daniel, is the table SQL data or an HTML table?
Daniel Naranjo
Lo Ultimo Group, C.A.
(+58) 414 7962406 / 416 2917532 / 295 6117632
www.loultimoenlaWEB.com
www.loultimoenHosting.com
www.loultimoenViajes.com
-
I am asking for feedback on this design. Here is an example user story:
As a group admin on the website I want to be notified when a user in my group uploads a file to the group.
The easiest solution would be to create and send an email message directly in the code handling the upload. However, that doesn't seem like the appropriate level of separation of concerns, so instead we are thinking of having a separate
worker process which does nothing but send notifications. So, the website in the upload code handles receiving the file, extracting some metadata from it (like filename) and writing this to the database. As soon as it is done handling the file upload it then
does two things: Writes the details of the notification to be sent (such as subject, filename, etc...) to a dedicated "notification" table and also creates a message in a queue which the notification sending worker process monitors. The entire sequence
is shown in the diagram below.
My questions are: Do you see any drawbacks in this design? Is there a better design? The team wants to use Azure Worker Roles, Queues and Table storage. Is it the right call to use these components or is this design unnecessarily complex? Quality attribute
requirements are that it is easy to code, easy to maintain, easy to debug at runtime, auditable (history is available of when notifications were sent, etc...), monitor-able. Any other quality attributes you think we should be designing for?
More info:
We are creating a cloud application (in Azure) in which there are at least 2 components. The first is the "source" component (for example a UI / website) in which some action happens or some condition is met that triggers a second component or "worker"
to perform some job. These jobs have details or metadata associated with them which we plan to store in Azure Table Storage. Here is the pattern we are considering:
Steps:
Condition for job met.
Source writes job details to table.
Source puts job in queue.
Asynchronously:
Worker accepts job from queue.
Worker Records DateTimeStarted in table.
Queue marks job as "in progress".
Worker performs job.
Worker updates table with details (including DateTimeCompleted).
Worker reports completion to queue.
Job deleted from queue.
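Purely as an in-process illustration (Python standard library only, no Azure SDK calls, and all names are made up), the queue-plus-table job lifecycle in the steps above can be sketched as:

```python
import queue
from datetime import datetime, timezone

job_table = {}             # stands in for the Azure job-details table
job_queue = queue.Queue()  # stands in for the Azure queue

def source_enqueue(job_id, details):
    """Source component: write job details to the table, then enqueue the job id."""
    job_table[job_id] = {"details": details, "status": "queued"}
    job_queue.put(job_id)

def worker_process_one():
    """Worker: take a job from the queue, record start/finish times, mark it done."""
    job_id = job_queue.get()
    row = job_table[job_id]
    row["started"] = datetime.now(timezone.utc)
    row["status"] = "in progress"
    # ... perform the actual job here, e.g. render and send the notification ...
    row["completed"] = datetime.now(timezone.utc)
    row["status"] = "done"
    job_queue.task_done()  # analogous to deleting the message from the queue
    return job_id

source_enqueue("n-1", {"subject": "New file uploaded", "filename": "report.pdf"})
done = worker_process_one()
print(done, job_table[done]["status"])  # n-1 done
```

In the real system the dict would be Azure Table Storage and the Queue an Azure Queue; the auditable history (queued/started/completed timestamps) lives in the table rows, which addresses the auditability requirement.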
Please comment and let me know if I have this right, or if there is some better pattern. For example's sake, consider the work to be "sending a notification", such as an email whose template fields are filled from the "details" mentioned in the pattern.
Hi,
Thanks for your posting.
This design can rule out some error cases, such as two file uploads completing at the same time. From my experience, it is a good choice for achieving the goal.
Best Regards,
Jambor
-
Error while creating a trigger on a nested table
I want to insert a record into a nested table. To do this, I created a view over the table that includes the nested table. It gave me: ORA-25015 cannot perform DML on this nested table view column. So I created a trigger on the nested table. However, it then gave me: ORA-25010 invalid nested table column name in nested table clause. I think my nested table is valid; I don't know why this problem appears.
My table is:
CREATE TABLE ENT (
  ID NUMBER(7) NOT NULL,
  CREATE_DATE VARCHAR2(11 BYTE),
  UPDATE_DATE VARCHAR2(11 BYTE),
  DEPTS VARRAY_DEPT_SEQ
);
CREATE OR REPLACE TYPE DEPT AS OBJECT (
  ID NUMBER(8),
  ANCHOR VARCHAR2(20)
);
CREATE OR REPLACE TYPE "VARRAY_DEPT_SEQ" AS VARRAY(930) OF DEPT;
CREATE OR REPLACE VIEW ENT_NESTED_VIEW
  (ID, CREATE_DATE, UPDATE_DATE, DEPTS)
AS
SELECT e.ID, e.CREATE_DATE, e.UPDATE_DATE,
       CAST(MULTISET(SELECT r.id, r.anchor FROM ent z, TABLE(z.depts) r WHERE z.ID = e.ID) AS varray_dept_seq)
FROM ENT e;
Then, when I created the trigger:
CREATE OR REPLACE TRIGGER EMP.ENT_NESTED_TRI
INSTEAD OF INSERT
ON NESTED TABLE DEPTS OF EMP.ENT_NESTED_VIEW
REFERENCING NEW AS New OLD AS Old PARENT AS Parent
FOR EACH ROW
BEGIN
END ;
I ran into the problem: ORA-25010 invalid nested table column name in nested table clause.
Could you please tell me the reason?
Thank you!
My insert SQL is:
insert into table(select depts from ent_nested_view where id=1856) values(varray_dept_seq(dept(255687,'AF58743')))
Message was edited by:
user589751
Hi, TongucY,
Compared with the "Referencing Clause with Nested Tables" part of this reference - http://psoug.org/reference/instead_of_trigger.html - I found the answer to this question. The cause is CREATE OR REPLACE TYPE "VARRAY_DEPT_SEQ" as varray(930) of DEPT: that makes it a varying array, not a nested table. It should be CREATE OR REPLACE TYPE "VARRAY_DEPT_SEQ" as table of DEPT. Then it is OK. Thank you very much!
There is, however, another question: if I create a varying array like CREATE OR REPLACE TYPE "VARRAY_DEPT_SEQ" as varray(930) of DEPT and I want to insert a record into the varying array of a row that already exists, the approach of creating a view and a trigger does not seem to work.
For instance,
There is a record in the table
ID:1020
CREATE_DATE:2005-10-20
UPDATE_DATE:2007-2-11
DEPTS: ((10225,AMY))
I want this record to become:
ID:1020
CREATE_DATE:2005-10-20
UPDATE_DATE:2007-2-11
DEPTS: ((10225,AMY),(10558,TOM))
How should I do this?
Could you please help me?
Best regards.
Message was edited by:
user589751 -
Hi,
I am trying to distribute SQL data objects - stored in a SQL data type TABLE OF <object-type> - to multiple (parallel) instances of a table function, by passing a CURSOR(...) to the table function, which selects from the TABLE OF storage via "select * from TABLE(CAST(<storage> as <storage-type>))".
But Oracle always uses only a single table function instance :-(
whatever hints I provide or settings I use for the parallel table function (parallel_enable ...).
Could it be that this is because my data are not globally available, but exist only in the main session?
Can someone confirm that it is not possible to start multiple parallel table functions selecting from a SQL data type TABLE OF <object> storage?
Here's an example sqlplus program to show the issue:
-------------------- snip ---------------------------------------------
set serveroutput on;
drop table test_table;
drop type ton_t;
drop type test_list;
drop type test_obj;
create table test_table (
  a number(19,0),
  b timestamp with time zone,
  c varchar2(256)
);
create or replace type test_obj as object(
  a number(19,0),
  b timestamp with time zone,
  c varchar2(256)
);
create or replace type test_list as table of test_obj;
create or replace type ton_t as table of number;
create or replace package test_pkg
as
  type test_rec is record (
    a number(19,0),
    b timestamp with time zone,
    c varchar2(256)
  );
  type test_tab is table of test_rec;
  type test_cur is ref cursor return test_rec;
  function TF(mycur test_cur)
    return test_list pipelined
    parallel_enable(partition mycur by hash(a));
end;
create or replace package body test_pkg
as
function TF(mycur test_cur)
return test_list pipelined
parallel_enable(partition mycur by hash(a))
is
sid number;
counter number(19,0) := 0;
myrec test_rec;
mytab test_tab;
mytab2 test_list := test_list();
begin
select userenv('SID') into sid from dual;
dbms_output.put_line('test_pkg.TF( sid => '''|| sid || ''' ): enter');
loop
fetch mycur into myRec;
exit when mycur%NOTFOUND;
mytab2.extend;
mytab2(mytab2.last) := test_obj(myRec.a, myRec.b, myRec.c);
end loop;
for i in mytab2.first..mytab2.last loop
-- attention: saves own SID in test_obj.a for indication to caller
-- how many sids have been involved
pipe row(test_obj(sid, mytab2(i).b, mytab2(i).c));
counter := counter + 1;
end loop;
dbms_output.put_line('test_pkg.TF( sid => '''|| sid || ''' ): exit, piped #' || counter || ' records');
end;
end;
declare
myList test_list := test_list();
myList2 test_list := test_list();
sids ton_t := ton_t();
begin
for i in 1..10000 loop
myList.extend; myList(myList.last) := test_obj(i, sysdate, to_char(i+2));
end loop;
-- save into the real table
insert into test_table select * from table(cast (myList as test_list));
dbms_output.put_line(chr(10) || 'copy ''mylist'' to ''mylist2'' by streaming via table function...');
select test_obj(a, b, c) bulk collect into myList2
from table(test_pkg.TF(CURSOR(select /*+ parallel(tab,10) */ * from table(cast (myList as test_list)) tab)));
dbms_output.put_line('... saved #' || myList2.count || ' records');
select distinct(tab.a) bulk collect into sids from table(cast (myList2 as test_list)) tab;
dbms_output.put_line('worker thread''s sid list:');
for i in sids.first..sids.last loop
dbms_output.put_line('sid #' || sids(i));
end loop;
dbms_output.put_line(chr(10) || 'copy physical ''test_table'' to ''mylist2'' by streaming via table function:');
select test_obj(a, b, c) bulk collect into myList2
from table(test_pkg.TF(CURSOR(select /*+ parallel(tab,10) */ * from test_table tab)));
dbms_output.put_line('... saved #' || myList2.count || ' records');
select distinct(tab.a) bulk collect into sids from table(cast (myList2 as test_list)) tab;
dbms_output.put_line('worker thread''s sid list:');
for i in sids.first..sids.last loop
dbms_output.put_line('sid #' || sids(i));
end loop;
end;
-------------------- snap ---------------------------------------------
Here's the output:
-------------------- snip ---------------------------------------------
copy 'mylist' to 'mylist2' by streaming via table function...
test_pkg.TF( sid => '98' ): enter
test_pkg.TF( sid => '98' ): exit, piped #10000 records
... saved #10000 records
worker thread's sid list:
sid #98 -- ONLY A SINGLE SID HERE!
copy physical 'test_table' to 'mylist2' by streaming via table function:
... saved #10000 records
worker thread's sid list:
sid #128 -- A LIST OF SIDS HERE!
sid #141
sid #85
sid #125
sid #254
sid #101
sid #124
sid #109
sid #142
sid #92
PL/SQL procedure successfully completed.
-------------------- snap ---------------------------------------------
I posted it to the newsgroup comp.databases.oracle.server
(summary: "10g: parallel pipelined table functions with cursor selecting from table(cast(SQL collection)) doesn't work"),
but I didn't get a response.
There I also gave some background information about my application:
-------------------- snip ---------------------------------------------
My application has a two-step/stage data selection.
A first select for minimal context base data
- mainly to evaluate which driving data records are due.
And a second select for all the "real" data needed to process a context
(joining many more tables here, which I don't want to do for non-due records).
So it does the stage #1 select first, then the stage #2 select - based on the stage #1 results - next.
The first implementation of the application did the stage #1 select in the main session of the PL/SQL code.
For the stage #2 select, the work was dispatched to multiple parallel table functions (in multiple worker sessions) for the "real work".
That worked.
However, there was a flaw:
Between records from the stage #1 selection and records from the stage #2 selection there is a 1:n relation (via a key / foreign key relation).
That means for one resulting record from the stage #1 selection there are x records from the stage #2 selection.
That forced me to use "cluster curStage2 by (theKey)",
because the worker sessions need to evaluate the overall status for a context of one record from stage #1 and x records from stage #2
(so each needs to have the x records of stage #2 together).
This resulted in a delay in starting up the worker sessions (I didn't find a way to get rid of it).
So I wanted to shift the invocation of the worker sessions to the stage #1 selection.
Then I wouldn't need the "cluster curStage2 by (theKey)" anymore!
But: I also need to update the primary driving data!
So the stage #1 select is a 'select ... for update ...'.
And you can't use that in a CURSOR for table functions (which I can understand).
So I have to do my stage #1 selection in two steps:
1. 'select for update' in the main session and collect the result in a SQL collection.
2. Pass the collected data to parallel table functions.
And for step 2 I found that it doesn't start up multiple parallel table function instances.
As a workaround
- if it's simply not possible to start multiple parallel pipelined table functions dispatching from 'select * from TABLE(CAST(... as ...))' -
I would need to select again on the base tables, driven by the SQL collection data.
But before I do that, I wanted to verify whether it's really not possible.
Maybe I'm just missing a special Oracle hint or whatever you can get "out of another box" :-)
- many thanks!
rgds,
FrankHi,
i try to distribute SQL data objects - stored in a SQL data type TABLE OF <object-Type> - to multiple (parallel) instances of a table function,
by passing a CURSOR(...) to the table function, which selects from the SQL TABLE OF storage via "select * from TABLE(CAST(<storage> as <storage-type>)".
But oracle always only uses a single table function instance :-(
whatever hints i provide or setting i use for the parallel table function (parallel_enable ...)
Could it be, that this is due to the fact, that my data are not
globally available, but only in the main thread data?
Can someone confirm, that it's not possible to start multiple parallel table functions
for selecting on SQL data type TABLE OF <object>storages?
Here's an example sqlplus program to show the issue:
-------------------- snip ---------------------------------------------
set serveroutput on;
drop table test_table;
drop type ton_t;
drop type test_list;
drop type test_obj;
create table test_table
a number(19,0),
b timestamp with time zone,
c varchar2(256)
create or replace type test_obj as object(
a number(19,0),
b timestamp with time zone,
c varchar2(256)
create or replace type test_list as table of test_obj;
create or replace type ton_t as table of number;
create or replace package test_pkg
as
type test_rec is record (
a number(19,0),
b timestamp with time zone,
c varchar2(256)
type test_tab is table of test_rec;
type test_cur is ref cursor return test_rec;
function TF(mycur test_cur)
return test_list pipelined
parallel_enable(partition mycur by hash(a));
end;
create or replace package body test_pkg
as
function TF(mycur test_cur)
return test_list pipelined
parallel_enable(partition mycur by hash(a))
is
sid number;
counter number(19,0) := 0;
myrec test_rec;
mytab test_tab;
mytab2 test_list := test_list();
begin
select userenv('SID') into sid from dual;
dbms_output.put_line('test_pkg.TF( sid => '''|| sid || ''' ): enter');
loop
fetch mycur into myRec;
exit when mycur%NOTFOUND;
mytab2.extend;
mytab2(mytab2.last) := test_obj(myRec.a, myRec.b, myRec.c);
end loop;
for i in mytab2.first..mytab2.last loop
-- attention: saves own SID in test_obj.a for indication to caller
-- how many sids have been involved
pipe row(test_obj(sid, mytab2(i).b, mytab2(i).c));
counter := counter + 1;
end loop;
dbms_output.put_line('test_pkg.TF( sid => '''|| sid || ''' ): exit, piped #' || counter || ' records');
end;
end;
declare
myList test_list := test_list();
myList2 test_list := test_list();
sids ton_t := ton_t();
begin
for i in 1..10000 loop
myList.extend; myList(myList.last) := test_obj(i, sysdate, to_char(i+2));
end loop;
-- save into the real table
insert into test_table select * from table(cast (myList as test_list));
dbms_output.put_line(chr(10) || 'copy ''mylist'' to ''mylist2'' by streaming via table function...');
select test_obj(a, b, c) bulk collect into myList2
from table(test_pkg.TF(CURSOR(select /*+ parallel(tab,10) */ * from table(cast (myList as test_list)) tab)));
dbms_output.put_line('... saved #' || myList2.count || ' records');
select distinct(tab.a) bulk collect into sids from table(cast (myList2 as test_list)) tab;
dbms_output.put_line('worker thread''s sid list:');
for i in sids.first..sids.last loop
dbms_output.put_line('sid #' || sids(i));
end loop;
dbms_output.put_line(chr(10) || 'copy physical ''test_table'' to ''mylist2'' by streaming via table function:');
select test_obj(a, b, c) bulk collect into myList2
from table(test_pkg.TF(CURSOR(select /*+ parallel(tab,10) */ * from test_table tab)));
dbms_output.put_line('... saved #' || myList2.count || ' records');
select distinct(tab.a) bulk collect into sids from table(cast (myList2 as test_list)) tab;
dbms_output.put_line('worker thread''s sid list:');
for i in sids.first..sids.last loop
dbms_output.put_line('sid #' || sids(i));
end loop;
end;
-------------------- snap ---------------------------------------------
Here's the output:
-------------------- snip ---------------------------------------------
copy 'mylist' to 'mylist2' by streaming via table function...
test_pkg.TF( sid => '98' ): enter
test_pkg.TF( sid => '98' ): exit, piped #10000 records
... saved #10000 records
worker thread's sid list:
sid #98 -- ONLY A SINGLE SID HERE!
copy physical 'test_table' to 'mylist2' by streaming via table function:
... saved #10000 records
worker thread's sid list:
sid #128 -- A LIST OF SIDS HERE!
sid #141
sid #85
sid #125
sid #254
sid #101
sid #124
sid #109
sid #142
sid #92
PL/SQL procedure successfully completed.
-------------------- snap ---------------------------------------------
I posted it to the newsgroup comp.databases.oracle.server
(summary: "10g: parallel pipelined table functions with cursor selecting from table(cast(SQL collection)) doesn't work"),
but I didn't get a response.
There I also wrote some background information about my application:
-------------------- snip ---------------------------------------------
My application does its data selection in two steps/stages.
A 1st select fetches minimal context base data
- mainly to find the driving data records that are due.
A 2nd select then fetches all the "real" data to process a context
(joining many more tables here, which I don't want to do for non-due records).
So it runs the stage #1 select first, then the stage #2 select - based on the stage #1 results - next.
The first implementation of the application did the stage #1 select in the main session of the PL/SQL code.
The stage #2 select was then dispatched to multiple parallel table functions (in multiple worker sessions) for the "real work".
That worked.
However, there was a flaw:
Between records from the stage #1 selection and records from the stage #2 selection there is a 1:n relation (via a key / foreign key relationship).
That means for one record from the stage #1 selection there are x records from the stage #2 selection.
This forced me to use "cluster curStage2 by (theKey)",
because each worker session needs to evaluate the overall status for a context of one stage #1 record and its x stage #2 records
(so it needs to have all x stage #2 records together).
The clustering in turn delayed the startup of the worker sessions (I didn't find a way to get rid of this).
So I wanted to shift the invocation of the worker sessions to the stage #1 selection;
then I wouldn't need the "cluster curStage2 by (theKey)" anymore.
But I also need to update the primary driving data,
so the stage #1 select is a 'select ... for update ...',
and such a statement can't be used in the CURSOR() expression for a table function (and I can understand why that's not possible).
So I have to do my stage #1 selection in two steps:
1. 'select for update' in the main session, collecting the result into a SQL collection.
2. Pass the collected data to the parallel table functions.
And for step 2 I noticed that Oracle does not start up multiple parallel table function instances.
As a work-around
- if it's just not possible to start multiple parallel pipelined table function instances from 'select * from TABLE(CAST(... as ...))' -
I would need to select again from the base tables, driven by the SQL collection data.
But before I do so, I wanted to verify that it's really not possible.
Maybe I'm just missing a special Oracle hint, or whatever else you can get "out of another box" :-)
-------------------- snap ---------------------------------------------
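The two-step approach and the proposed work-around could be sketched as follows (hedged: the key column a and the exact queries are illustrative assumptions, not from the original post):

```sql
declare
  myList  test_list;
  myList2 test_list;
begin
  -- step 1: lock the due driving records in the main session and
  -- collect them into a SQL collection
  select test_obj(a, b, c)
    bulk collect into myList
    from test_table
   for update;

  -- step 2 (stays serial): feeding the collection to the parallel
  -- pipelined function runs in a single worker session only
  select test_obj(a, b, c)
    bulk collect into myList2
    from table(test_pkg.TF(CURSOR(
           select /*+ parallel(tab,10) */ *
             from table(cast (myList as test_list)) tab)));

  -- work-around: let the parallel cursor scan the physical table again,
  -- restricted to the keys collected in step 1, so Oracle can
  -- distribute the rows across multiple worker sessions
  select test_obj(a, b, c)
    bulk collect into myList2
    from table(test_pkg.TF(CURSOR(
           select /*+ parallel(tab,10) */ *
             from test_table tab
            where tab.a in (select k.a
                              from table(cast (myList as test_list)) k))));
end;
/
```

The trade-off of the work-around is a second scan of the base table, but the cursor then reads a physical segment that the parallel query machinery can partition.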
- many thanks!
rgds,
Frank -
How to Copy Existing values & make new entries in ISU Database tables TE609
Hi friends,
In SAP IS-U, how can I maintain new MR reasons for the database table TE609 (MR Reasons (Values)), so that when I press F4 help on the selection screen of the standard transaction EL28 those new MR reasons are shown?
There is a maintenance view, TE609T, and I maintained the entries in it, but those entries are not reflected in transaction EL28 when I press F4 help.
Thanks in Advance
Ganesh

Hello Ganesh,
You are not dealing with the same data element (the meaning of the field):
in EL28 it's ABLESGRE - meter reading reasons for single entry;
on table TE609 it's ABLESGR - meter reading reason.
If you take a look at the domain of the data element ABLESGRE, you will see that it has FIXED VALUES; that's where your values come from.
For single entry (EL28) you can only use those values (the fixed values from the domain of the data element).
Maybe what you need is to find another transaction to enter meter readings, and not SINGLE ENTRY.
Don't forget to give me points if it's helping you.
Kind Regards,
Artur Moreira -
Hi Experts,
How can I export the data in an ALV list as a pivot table?
Please help me with this.
Thanks in advance.
Regards,
Rajaram

Hi Raja,
The following explanation helps you understand the export process.
Please find the following SAP Note 700206.
Symptom
Excel export gives the error "List object is too large to be exported" when a large list object is exported. This message appears only when function module XXL_FULL_API is used for the Export to Excel functionality.
Other terms
Excel Export, XXL Export, i_oi_spreadsheet, Size Limit,
Reason and Prerequisites
The older technology used and old MS Office releases have a limitation of exporting only 16384 lines of data. This change is valid only for SAP R/3 releases higher than 4.6C and MS Office releases higher than Office 97.
On the pop-up dialog "Export list object to XXL" with its three options
"Excel SAP macros", "Table" and "Pivot table", this change is valid only
for the "Table" and "Pivot table" options and not for "Excel SAP macros".
The "Excel SAP macros" option is not affected by this change in any R/3 release.
This correction is made in function module XXL_FULL_API and is only valid when XXL_FULL_API is called to export the list to Excel.
Solution
The maximum size limit was increased to 65536 rows and 256 columns for the Table and Pivot Table options of the Excel export. This change is available for the Office integration technology only; no adaptation to the old SAP macros is possible. Import the relevant ABAP patch.
NOTE: If you export a large list object there could be a performance problem, directly proportional to the amount of data (the size of the list object) exported. The maximum limits with XXL export are: number of rows - 65536; number of columns - 256; size of an individual cell - 255 characters.
Reward if helpful.
Thankyou,
Regards.