Spry:state and dynamic accordion

When I use a dynamic accordion (i.e. populated by a dataSet),
the accordion functionality disappears. Here is my code. Am I doing
something wrong, or is this just a bug?
Thanks
Andy
<div spry:region="dsOrders">
  <div spry:state="loading">Data is loading...</div>
  <div spry:state="error">Error loading data...</div>
  <div spry:state="ready">
    <div id="orders" class="Accordion">
      <div class="AccordionPanel" spry:repeat="dsOrders">
        <div class="AccordionPanelTab"><h3 spry:content="{ORDERNUM}"></h3></div>
        <div class="AccordionPanelContent"><span spry:content="{ORDERDATE}"></span></div>
      </div>
    </div>
  </div>
</div>
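
For what it's worth, a likely cause is that the Accordion constructor (the new Spry.Widget.Accordion("orders") call that normally sits in a script block after the markup) runs before the spry:region has generated the panels, and the widget behaviour is lost each time the region re-renders. A hedged sketch of one workaround, assuming the region div is given an id such as "ordersRegion" (that id and the observer wiring are illustrative, not from the original post, and rely on the Region observer API as I remember it):
<script type="text/javascript">
// Re-create the Accordion each time the region finishes re-rendering,
// because the region rewrites its markup and discards widget behaviour.
var ordersObserver = {
  onPostUpdate: function(notifier, data) {
    new Spry.Widget.Accordion("orders");
  }
};
Spry.Data.Region.addObserver("ordersRegion", ordersObserver);
</script>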

This might also be helpful to some. Not only do we have
states in Spry... but events! Here is how I did it on a site that
has a dataset called dsOpportunities. The following HTML/JavaScript
was added to the head of the HTML, and <body onload="afterReady()">
was used to make sure it runs. Remember that the section you are
hiding needs to have a class set up the same way (or, if it is
unique, you could do something similar with an id rather than a class).
<script type="text/javascript">
function afterReady() {
  // hide the content until the dataset has finished loading
  $('pageContent').style.visibility = 'hidden';
  var iSPRY = new Object;
  // show the content again once the dataset has loaded
  iSPRY.onPostLoad = function(dataSet, data) {
    $('pageContent').style.visibility = 'visible';
  };
  dsOpportunities.addObserver(iSPRY);
}
</script>
<style type="text/css">
.pageContent {
  visibility: hidden;
}
</style>

Similar Messages

  • Spry:Content and Dynamic Spry Data

    Is there a way to get the spry:content attribute to work well
    with dynamic data generated from PHP/MySQL? I have read the
    progressive enhancement article and I am totally lost on it.
    The examples provided in the documentation deal with static
    data, but there are no examples of using spry:content with
    dynamic data.
    Can anyone help?
    Thanks

    Hi Arnout
    These are the urls:
    http://www.grafikkaos.co.uk/pages/front/test_home.php
    - this one displays the spry:content properly, but in the source,
    it does not show the 5 articles.
    http://www.grafikkaos.co.uk/pages/front/test_home_2.php
    - I applied a PHP repeat region here. The source shows the 5
    articles, but on page view each title and date is repeated 5
    times.
    Any ideas?
    Thanks

  • Spry:State and PagedView

    Dear Member,
    I hope somebody can help me this time.
    I am using Spry, and a lot of tricks to make it work.
    The spry:state="loading" option doesn't want to work for me;
    it only works if I don't use the PagedView...
    Here is the simple code.
    I have a simple XMLdataset:
    var dsmio = new Spry.Data.XMLDataSet("Xml/filexxx.asp", "Clienti/Cliente", {sortOnLoad: "RagioneSociale", sortOrderOnLoad: "ascending", useCache: false});
    and a pagination on it:
    var paginazione = new Spry.Data.PagedView( dsmio ,{ pageSize: 50 });
    finally the code to show my data:
    <div spry:region="paginazione">
    <div spry:state="loading">Loading Data - Please stand by...</div>
    <div spry:state="error">OPS, something went wrong!</div>
    <div spry:state="ready">
      <table width="100%">
        <tr>
          <th width="25%" height="20" align="left" bgcolor="#CCCCCC" spry:sort="RagioneSociale">RagioneSociale</th>
          <th width="25%" height="20" align="left" bgcolor="#CCCCCC" spry:sort="Email">Email</th>
          <th width="25%" height="20" align="left" bgcolor="#CCCCCC" spry:sort="Telefono">Telefono</th>
          <th width="25%" height="20" align="left" bgcolor="#CCCCCC" spry:sort="PersonaRiferimento">PersonaRiferimento</th>
        </tr>
        <tr spry:repeat="paginazione" class="{ds_EvenOddRow}">
         <td width="25%" height="20">{RagioneSociale}</td>
          <td width="25%" height="20">{Email}</td>
          <td width="25%" height="20">{Telefono}</td>
          <td width="25%" height="20">{PersonaRiferimento}</td>
        </tr>
      </table>
    </div>
    </div>
    Hope some Guru can help me

    Hi guys!
    This is easy to solve:
    You have to work in two different regions!
    <div spry:region="dsmio">
      --- Put the loader here...
    </div>
    <div spry:region="paginazione">
    content
    </div>
    You can hide the "layout" of the content by :
    <div id="content" spry:if="'{ds_PageTotalItemCount}' != ' ' ">
    content...
    </div>
    ds_PageTotalItemCount is empty during loading.....
    Have fun & keep coding
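
    Putting the two suggestions together, a rough sketch of the combined markup might look like this (only the structure matters; the header and spry:repeat rows stay exactly as in the original region):
    <div spry:region="dsmio">
      <div spry:state="loading">Loading Data - Please stand by...</div>
      <div spry:state="error">OPS, something went wrong!</div>
    </div>
    <div spry:region="paginazione">
      <!-- hidden while ds_PageTotalItemCount is still empty, i.e. while loading -->
      <div id="content" spry:if="'{ds_PageTotalItemCount}' != ' ' ">
        <table width="100%">
          <!-- header row and spry:repeat rows from the original table go here -->
        </table>
      </div>
    </div>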

  • Programmatic openPanels and Dynamic Accordion Content

    I have been trying to add links to programmatically open
    panels in my accordion, which is successfully populated with an XML
    dataset. It doesn't seem to work (I tried using both "simple
    navigation" and "panel index" links).
    This does work when the data in the accordion is hard-coded.
    I created a test page directly from the AccordionSample2 page
    in the samples with Spry 1.5 (all of my files are 1.5).
    I added the programmatic links to open each of the 2 methods
    of populating the accordion on that page, and they don't seem to
    work, either.
    Test Page is here:
    www.imagicdigital.com/spry/AccordionSample2test.html
    I thought perhaps the dynamically created accordion is using
    a different id, so that's why the programmatic linking doesn't
    work, but when I viewed the generated markup with the debugger, it
    appears to use the given id.
    Any thoughts or help? Thanks in advance.

    http://labs.adobe.com/technologies/spry/samples/utils/URLUtilsSample.html
    That might help you out. The bottom example is a tabbed widget
    that gets activated by clicking a link.
    And this:
    http://labs.adobe.com/technologies/spry/articles/data_api/apis/collapsible_panel.html#methods
    might help you out too.
    Also, you might want to add the
    <script type="text/javascript">
    var a1 = new Spry.Widget.Accordion("Acc2");
    </script>
    part, because that is missing and it is giving errors. If you
    have Firebug installed you will see those errors in Firefox.
    getfirefox.com <-- Firefox
    getfirebug.com <-- Firebug
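
    For the programmatic links themselves, a small sketch (it assumes the constructor above has already run, and that openPanel() accepts a panel index, as in the Accordion API documentation):
    <script type="text/javascript">
    var a1 = new Spry.Widget.Accordion("Acc2");
    </script>
    <!-- open the second panel (index 1) from a plain link -->
    <a href="#" onclick="a1.openPanel(1); return false;">Open panel 2</a>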

  • DDL statements and dynamic SQL in stored procedure

    I created a stored procedure to create and drop tables, using dynamic sql.
    When I try to do the inserts using dynamic sql, i.e
    v_string := 'INSERT statement';
    EXECUTE IMMEDIATE v_string;
    I get the following error message:
    ERROR at line 1:
    ORA-00942: table or view does not exist
    ORA-06512: at line 63
    Line 63 happens to be the line that the EXECUTE IMMEDIATE v_string; statement is in.
    I am able to describe the table that the inserts are being made into, so I know that the table exists.
    Any idea why I'm getting this error message would be appreciated.

    Yes I do and I have been able to create other tables using dynamic sql.
    The table that I am having problems with SELECTs data from another table to get its column values; within the SELECT statement, the CAST function is used:
    ie. CAST(CASE SUBSTR(CAST(E_MOD AS VARCHAR(7)),2,3)
    WHEN 'AAA' THEN 'A55'
    ELSE ............
    I get the following error message:
    ERROR at line 18: (this line starts the CAST statement)
    ORA-06550: line 18, column 13:
    PLS-00103: Encountered the symbol "AAA" when expecting one of the following:
    . ( * @ % & = - + ; < / > at in is mod not rem return
    returning <an exponent (**)> <> or != or ~= >= <= <> and or
    like between into using || bulk
    When I remove the quotes or add another single quote, the same error cascades to 'A55'.
    After doing the same for the next error, I get the error message below:
    ERROR at line 1: (this line has the EXECUTE IMMEDIATE statement)
    ORA-00936: missing expression
    ORA-06512: at line 6
    Any idea what the problem could be?
    Also is there another way to have DDL statements as stored procedures other than using dynamic sql or the DBMS_SQL package?
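
    On the second error: the PLS-00103 around 'AAA' usually means the single quotes inside the dynamic string were not escaped, so the string literal ends early. A hedged sketch of how the statement text might be built (target_tab, source_tab, col1 and the ELSE branch are placeholders, not from the post); each quote inside the PL/SQL string is doubled, or the q'[ ]' quoting mechanism avoids the doubling altogether. The original ORA-00942, despite being able to DESCRIBE the table, is often because privileges granted through a role are not visible inside a definer's-rights stored procedure; a direct grant is usually needed.
    DECLARE
      v_string VARCHAR2(4000);
    BEGIN
      -- quotes inside the SQL text are doubled ('' becomes ' at run time)
      v_string := 'INSERT INTO target_tab (col1) ' ||
                  'SELECT CASE SUBSTR(CAST(e_mod AS VARCHAR2(7)), 2, 3) ' ||
                  '         WHEN ''AAA'' THEN ''A55'' ' ||
                  '         ELSE NULL ' ||
                  '       END ' ||
                  'FROM source_tab';
      -- alternative: v_string := q'[INSERT INTO target_tab (col1) SELECT ... WHEN 'AAA' THEN 'A55' ... FROM source_tab]';
      EXECUTE IMMEDIATE v_string;
    END;
    /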

  • Performance between SQL Statement and Dynamic SQL

    Select emp_id
    into id_val
    from emp
    where emp_id = 100;
    EXECUTE IMMEDIATE
    'Select ' || t_emp_id ||
    ' from emp' ||
    ' where emp_id = 100'
    into id_val;

    Will there be more impact on performance while using dynamic SQL?

    CP wrote:
    Will there be more impact in performance while using Dynamic SQL?
    All SQLs are parsed and executed as SQL cursors.
    The two SQLs (dynamic and static) result in the exact same SQL cursor, so both methods will use an identical cursor. There are therefore no performance differences in terms of how fast that SQL cursor will be.
    If an identical SQL cursor is not found (a soft parse), the SQL engine needs to compile the SQL source code supplied into a SQL cursor (a hard parse).
    Hard parsing burns a lot of CPU cycles. Soft parsing burns fewer CPU cycles and is therefore better. However, no parsing at all is the best.
    To explain: if the code creates a cursor (e.g. INSERT INTO tab VALUES( :1, :2, :3 ) for inserting data), it can do it as follows:
    while More Data Found loop
      parse INSERT cursor
      bind variables to INSERT cursor
      execute INSERT cursor
      close INSERT cursor
    end loop
    If that INSERT cursor does not yet exist, it will be hard parsed and a cursor created. Each subsequent loop iteration will result in a soft parse.
    However, the code will be far more optimal as follows:
    parse INSERT cursor
    while More Data Found loop
      bind variables to INSERT cursor
      execute INSERT cursor
    end loop
    close INSERT cursor
    With this approach the cursor is parsed (hard or soft) once only. The cursor handle is then used again and again, and when the application is done inserting data, the cursor handle is released.
    With dynamic SQL in PL/SQL, you cannot really follow the optimal approach - unless you use DBMS_SQL (a complex cursor interface). With static SQL, PL/SQL's optimiser can kick in and optimise its access to the cursors your code creates, minimising parsing altogether.
    This is however not the only consideration when using dynamic SQL. Dynamic SQL makes coding a lot more complex. The SQL code can only be checked at execution time, not at development time. There is the issue of creating shareable SQL cursors using bind variables. There is the risk of SQL injection. Etc.
    So dynamic SQL is seldom a good idea. And IMO, the vast majority of people who post problems here about dynamic SQL are using it unnecessarily, for no justified or logical reason, creating unstable, insecure and non-performing code.
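
    As a small PL/SQL sketch of the shareable-cursor point (the emp table and emp_id column are from the example above; the loop bounds are illustrative): binding the value keeps a single cursor, while concatenating the literal forces a new parse per distinct value.
    DECLARE
      l_cnt NUMBER;
    BEGIN
      FOR i IN 1 .. 100 LOOP
        -- one shareable cursor: the value is supplied as a bind variable
        EXECUTE IMMEDIATE 'select count(*) from emp where emp_id = :1'
          INTO l_cnt USING i;
        -- a new cursor per value: each distinct literal means another hard parse
        -- EXECUTE IMMEDIATE 'select count(*) from emp where emp_id = ' || i INTO l_cnt;
      END LOOP;
    END;
    /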

  • Spry Tabbed panels + Progressive Enhancement and Dynamic Loading of Content With Spry

    Is there any way to combine tabbed panels together with "Progressive Enhancement and Dynamic Loading of Content With Spry"?
    Visit: http://labs.adobe.com/technologies/spry/articles/best_practices/progressive_enhancement.html#updatecontent
    And click on the "Using Spry.Utils.updateContent()"
    The 3rd example shows how to use a fade transition whenever the content changes.
    I already have tabbed panels. My menu contains buttons (on tabs) and my Content div contains the panels.
    Tabs code;
    <ul class="TabbedPanelsTabGroup">
              <li class="TabbedPanelsTab">
                   <table class="Button"  >
                        <tr>
                        <td style="padding-right:0px" title ="Home">
                        <a href="javascript:TabbedPanels1.showPanel(1);" title="Home" style="background-image:url(/Buttons/Home.png);width:172px;height:75px;display:block;"><br/></a>
                        </td>
                        </tr>
                   </table>
              </li>
    etc
    etc
    etc
    and the panel code:
    <div class="TabbedPanelsContent" id="Home">
         CONTENT
    </div>
    I hoped I could use the example code from the link in my tabbed panels.
    I thought this code:
    onclick="FadeAndUpdateContent('event', 'data/AquoThonFrag.html'); return false;"
    could be added to the tab code like this:
    <a href="javascript:TabbedPanels1.showPanel(1);" onclick="FadeAndUpdateContent('event', 'data/AquoThonFrag.html'); return false;" title="Home" style="background-image:url(/Buttons/Home.png);width:172px;height:75px;display:block;"><br/></a>
    But the content doesn't fade...
    I know I need to change the header, etc.
    The following is from the link:
    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
    <html xmlns="http://www.w3.org/1999/xhtml" xmlns:spry="http://ns.adobe.com/spry">
    <head>
    <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
    <title>Aquo Events</title>
    <script src="../../../includes/SpryEffects.js" type="text/javascript"></script>
    <script src="../../../includes/SpryData.js" type="text/javascript"></script>
    <script type="text/javascript">
    <!--
    function FadeAndUpdateContent(ele, url)
    {
         try {
              Spry.Effect.DoFade(ele, { duration: 500, from: 100, to: 0, finish: function() {
                   Spry.Utils.updateContent(ele, url, function() {
                        Spry.Effect.DoFade(ele, { duration: 500, from: 0, to: 100 });
                   });
              }});
         } catch(e) { alert(e); }
    }
    -->
    </script>
    <style type="text/css">
    /* IE HACK to prevent bad rendering when fading. */
    #event { background-color: white; }
    </style>
    </head>
    So I changed my header etc., put SpryEffects.js and SpryData.js into position, and nothing changed...
    Is there a way to keep my tabbed panels (or change as little as possible) and make
    A. the fade work
    B. the loading work?
    The problem now is that it loads all pages instead of only the home page; that is why I wanted this progressive enhancement.
    And the fading part is just because it's nice...

    It doesn't show in the post, but of course I changed this link:
    "data/AquoThonFrag.html"
    into:
    "javascript:TabbedPanels1.showPanel(1);"
    I must say I don't know if this even works...
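
    One way to try both at once is a small wrapper that switches the tab and then runs the fade/update on the panel. A sketch only: it assumes the panel's id, "Home", is the element FadeAndUpdateContent should fade and fill, and that the fragment URL is the one from the post.
    <script type="text/javascript">
    // Hypothetical helper: show the tab, then fade the panel out,
    // load the fragment into it, and fade it back in.
    function ShowFadeAndUpdate(panelIndex, contentId, url)
    {
         TabbedPanels1.showPanel(panelIndex);
         FadeAndUpdateContent(contentId, url);
         return false;
    }
    </script>
    <a href="#" onclick="return ShowFadeAndUpdate(1, 'Home', 'data/AquoThonFrag.html');" title="Home" style="background-image:url(/Buttons/Home.png);width:172px;height:75px;display:block;"><br/></a>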

  • Gallery and spry:state

    How do I make spry:state="loading" work with your image
    gallery? I can get it to work just fine when it is loading the
    dataset, but when you click a thumbnail it doesn't work the same
    way. Would the spry:state need to be added to gallery.js, since
    that has the function that handles changing the image?

    Hi,
    The spry:state="loading" attribute can be used only inside a
    spry:region or spry:detailregion - its content is displayed while
    the markup gets updated, but this doesn't include the time all
    the resources inside that markup are downloaded. For example, if
    there are images inside that region, when the spry:region gets
    updated, the "loading" content is displayed only for the time it
    takes the markup containing <img src=".."> to get inside the page;
    "loading" is not displayed while the image is actually downloaded
    by the browser.
    For the gallery demo, the markup for the big image is not
    inside a spry:region, because the URL for the image is set from
    custom functions inside gallery.js that use data from the existing
    datasets. It's done this way because effects are added when the
    image is displayed.
    regards,
    Dragos
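
    If a "loading" indicator is wanted while the big image itself downloads, one plain-DOM approach (a sketch; the element ids are illustrative and this is not part of the gallery demo) is to toggle it from the image's own load event inside the function that sets the image URL:
    <script type="text/javascript">
    // Show a spinner while the main image downloads, hide it when it has arrived.
    function setMainImage(url)
    {
         var img = document.getElementById('mainImage');
         var loading = document.getElementById('loadingIndicator');
         loading.style.display = 'block';
         img.onload = function() { loading.style.display = 'none'; };
         img.src = url;
    }
    </script>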

  • Spry:state unexpected results

    I am using Spry/AJAX to populate a page, but whenever the
    page loads you see the dynamic content tags in the form fields and
    then it reloads with the data in the form fields. Is there any way
    to make it not show the dynamic content tags and just show the form
    once the data is in the form fields? Go to the following link to
    see what I mean:
    http://www.homesandagents.com/Members/contact_info.asp?userID=10000&sessionID=&contactID=10064
    I tried using spry:state="loading", but it still shows the
    form with the dynamic tags first (ie. {last_name}), then the
    loading state, and then finally the ready state.

    I'm not seeing the data references when I load your page. But
    that may be due to the fact that it looks like you've added the
    SpryHiddenRegion class to your region.
    --== Kin ==--
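
    For reference, the SpryHiddenRegion approach looks roughly like this (the dataset name is illustrative; the CSS rule has to be defined in the page, and Spry removes the class once the region has been processed, as far as I recall):
    <style type="text/css">
    /* keep the region invisible until Spry has replaced the {placeholders} */
    .SpryHiddenRegion { visibility: hidden; }
    </style>
    <div spry:region="dsContact" class="SpryHiddenRegion">
      <input type="text" value="{last_name}" />
    </div>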

  • Insert into using a select and dynamic sql

    Hi,
    I've got a hopefully easy question. I have a procedure that updates 3 tables with 3 different update statements. The procedure goes through and updates rows in the ranges I pass in. I am hoping to create another table that will take those updates as an insert statement and append the data onto the existing data.
    I am thinking of using dynamic SQL, but I am sure there is an easy way to do it in plain PL/SQL as well. Below I have pasted my procedure, and at the bottom is the insert statement I want to use. I am fairly sure I can do it using dynamic SQL, but I am not familiar with the syntax.
    CREATE OR REPLACE PROCEDURE ACTIVATE_PHONE_CARDS (min_login in VARCHAR2, max_login in VARCHAR2, vperc in VARCHAR2) IS
    BEGIN
    UPDATE service_t SET status = 10100
    WHERE poid_id0 in
    (SELECT poid_id0 FROM service_t
    WHERE poid_type='/service/telephony'
    AND login >= min_login AND login <= max_login);
    DBMS_OUTPUT.put_line( 'Service Status:' || sql%rowcount);
    UPDATE account_t SET status = 10100
    WHERE poid_id0 IN
    (SELECT account_obj_id0 FROM service_t
    WHERE poid_type = '/service/telephony'
    AND login >= min_login AND login <= max_login);
    DBMS_OUTPUT.put_line( 'Account Status:' || sql%rowcount);
    UPDATE account_nameinfo_t SET title=Initcap(vperc)
    WHERE obj_id0 IN
    (SELECT account_obj_id0 FROM service_t
    WHERE poid_type='/service/telephony'
    AND login >=min_login AND login <= max_login);
    DBMS_OUTPUT.put_line('Job Title:' || sql%rowcount);
    INSERT INTO phone_card_activation values which = 'select a.status, s.status, s.login, to_char(d.sysdate,DD-MON-YYYY), ani.title
    from account_t a, service_t s, account_nameinfo_t ani, dual d
    where service_t.login between service_t.min_login and service_t.max_login
    and ani.for_key=a.pri_key
    and s.for_key=a.pri_key;'
    END;
    Thanks for any advice, and have a good weekend.
    Geordie

    Correct me if I am wrong, but aren't these equal?
    UPDATE service_t SET status = 10100
    WHERE poid_id0 in
    (SELECT poid_id0 FROM service_t
    WHERE poid_type='/service/telephony'
    AND login >= min_login AND login <= max_login);
    (update all the records whose id is in the sub-query that meets the WHERE clause)
    AND
    UPDATE service_t SET status = 10100
    WHERE poid_type='/service/telephony'
    AND login >= min_login AND login <= max_login;
    (update all the records that meet the WHERE clause)
    This should equate to the same record set, in which case the second update would be quicker without the sub-query.
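
    For the insert itself, dynamic SQL should not be needed at all; a static INSERT ... SELECT inside the same procedure would do. A rough sketch based on the draft above (the join conditions, the poid_type filter and the selected columns are guesses from that draft, so adjust as required):
    INSERT INTO phone_card_activation
    SELECT a.status,
           s.status,
           s.login,
           SYSDATE,
           ani.title
    FROM   account_t a,
           service_t s,
           account_nameinfo_t ani
    WHERE  s.login BETWEEN min_login AND max_login
    AND    s.poid_type = '/service/telephony'
    AND    ani.for_key = a.pri_key
    AND    s.for_key   = a.pri_key;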

  • ODBC, bulk inserts and dynamic SQL

    I am writing an application running on Windows NT 4 and using the Oracle ODBC driver (8.01.05.00) that inserts many rows at a time (10000+) into an Oracle 8i database.
    At present, I am using a stored procedure to insert each row into the database. The stored procedure uses dynamic SQL because I can only determine the table and field names at run time.
    Due to the large number of records, it tends to take a while to perform all the inserts. I have tried a number of solutions, such as using batches of SQL statements (e.g. "INSERT...;INSERT...;INSERT..."), but the Oracle ODBC driver only seems to act on the first statement in the batch.
    I have also considered using the FORALL statement and the SQL*Loader utility.
    My problem with FORALL is that I'm not sure it works on dynamic SQL statements, and even if it did, how do I pass an array of statements to the stored procedure?
    I ruled out SQL*Loader because I could not find a way to invoke it from an ODBC statement. Secondly, it requires spawning a new process.
    What I am really after is something similar to the SQL Server (forgive me!) BULK INSERT statement, where you can simply create an input file with all the records you want to insert and pass it along in an ODBC statement such as "BULK INSERT <filename>".
    Any ideas?

    Hi,
    I faced this same situation years ago (Oracle 7.2!) and had the following alternatives:
    1) Use a 3rd-party tool such as Sagent or CA Info pump (very pricey $$$).
    2) Use Visual C++ and OCI to hook into the array insert routines (there are examples of these in the Oracle Home).
    3) Use SQL*Loader (the best performance, but no real control of what's happening).
    I ended up using (2) and used the Rogue Wave dbtools.h++ library to speed up the development.
    These days, I would also suggest you take a look at Perl on NT (www.activestate.com) and the DBlib modules at www.perl.org. I believe they will also do bulk loading.
    Your problem is that your program is using Oracle ODBC, when you should be using Oracle OCI for best performance.

  • Ref cursors and dynamic sql..

    I want to be able to use a function that will dynamically create a SQL statement, then open a cursor based on that SQL statement and return a ref to that cursor. To achieve that, I am trying to build the SQL statement in a VARCHAR2 variable and use that variable to open the ref cursor, as in:
    open l_stmt for refcurType;
    where refcurType is a strong ref cursor. I am unable to do so, because I get an error indicating that I cannot use a strong ref cursor type. But if I cannot use a strong ref cursor, I will not be able to use it to build the report based on the ref cursor, because Reports 9i requires strong ref cursors to be used. Does that mean I cannot use dynamic SQL with Reports 9i ref cursors? If not, how can I do it? Any documentation available?

    Philipp,
    Thank you for your reply. My requirement is that sometimes I need to construct a whole query based on some input, and sometimes not. But the output record set would be the same and the layout would be more or less the same, so I thought a ref cursor would be ideal. Of course, I could do this without dynamic SQL by writing the SQL multiple times if needed, but I think dynamic SQL is a proper candidate for this case. Your suggestion to use a lexical variable is indeed a good alternative. In effect, if needed, I could generate an entire SQL statement, place it in a placeholder (like &stmt) and use it as a static SQL query in my data model. In that case, why would one ever need a ref cursor in Reports? Is one more efficient than the other? My guess is that in the lexical variable case, part of the processing (like parsing) is done on the app server, while with a function-based ref cursor the entire process takes place in the DB server and there is probably a better chance for re-use(?)
    Thanks,
    Murali.
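
    For reference, the original error is expected behaviour: OPEN ... FOR with a dynamic SQL string only works with a weak ref cursor type (such as the built-in SYS_REFCURSOR); strong (typed) ref cursors can only be opened on static SQL. A minimal sketch, using the usual EMP sample columns:
    DECLARE
      l_stmt VARCHAR2(4000);
      l_cur  SYS_REFCURSOR;   -- weak ref cursor: no fixed return type
    BEGIN
      l_stmt := 'SELECT empno, ename FROM emp WHERE deptno = :1';
      OPEN l_cur FOR l_stmt USING 10;   -- dynamic OPEN ... FOR requires a weak type
      -- fetch from l_cur, or hand it back to the caller, then close it
      CLOSE l_cur;
    END;
    /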

  • BCP and dynamic SQL

    Hello All,
    Been looking into this for a couple of days, and I keep hitting brick walls, so I'm hoping someone can offer me a bit of inspiration. What I'm trying to do is write a stored procedure that lets the user specify a list of tables, and an output directory, and the SP creates a series of BCP statements that export these tables to comma delimited files.
    This wouldn't be too hard, but I need to output the field headings in the first row of the table (and use quotes as text qualifiers). I'm doing this by looping round sys.columns, pulling out all the fieldnames, creating two select statements, and UNION ALL-ing them together. e.g.......
    select 'FIELD1','FIELD2','FIELD3','FIELD4'
    union all
    select field1,field2,field3,field4 from tablename
    It all works fine until you try it on a table with a lot of columns. Although you can build a big SQL statement in an NVARCHAR(MAX), BCP only appears to read the first 4000 characters of it, so it fails.
    To get round this, I've moved all of the code that builds the big SQL statement to its own stored procedure (i.e. you pass the tablename, and it returns the table with the field names in the first row). Then, I can just call this new SP in my BCP statement, with a couple of parameters. 
    The problem I'm getting is that BCP is complaining, saying '[Microsoft][SQL Native Client]BCP host-files must contain at least one column'. I'm setting NOCOUNT on, and there are no print statements, so I'm assuming this is because the data is getting returned via an exec sp_executesql (although this is a guess). I can't think of a way round this though, as the SQL needs to be dynamic.
    alter PROCEDURE [dbo].[sp_QBMultiFileExportGetData]
    @tablename varchar(100),
    @dbname varchar(100)
    AS
    BEGIN
    declare @Execstring as nvarchar(MAX)
    declare @currentfieldname as varchar(100)
    declare @selectlist as varchar(8000)
    declare @fieldnamelist as varchar(8000)
    declare @colnames table
    (
    columnname varchar(100)
    )
    begin
    set nocount on
    set @execstring='select COLUMN_NAME '+
    'from ' + @dbname + '.INFORMATION_SCHEMA.COLUMNS where TABLE_NAME = ''' + @tablename + ''''
    insert into @colnames(columnname)
    exec sp_executesql @execstring
    set @selectlist=''
    set @fieldnamelist=''
    --Loop through fieldnames, and build two strings
    --One for outputting fieldnames and one for selecting the actual data
    while exists(select * from @colnames)
    begin
    select top 1 @currentfieldname=columnname from @colnames
    set @selectlist=@selectlist + 'quotename(['+ @currentfieldname + '],char(34)),'
    set @fieldnamelist=@fieldnamelist + '''' + @currentfieldname + ''' [' +@currentfieldname + '],'
    delete from @colnames where columnname=@currentfieldname
    end
    --remove last quote
    set @selectlist=substring(@selectlist,1,len(@selectlist)-1)
    set @fieldnamelist=substring(@fieldnamelist,1,len(@fieldnamelist)-1)
    --Built string to execute, with fieldnames, and select fields
    set @execstring='select ' + @fieldnamelist  + ' union all select ' + @selectlist + ' from ' + @dbname + '..'  + @tablename
    exec sp_executesql @execstring
    end
    END
    this returns exactly what I want, but when I try to use it in a BCP statement, I get the error....
    i.e.
    EXEC master..xp_cmdshell 'bcp "exec QCDev.dbo.sp_QBMultiFileExportGetData ''tablename'',''dbname''" queryout C:\\outputfile.txt -T -t","'
    Error = [Microsoft][SQL Native Client]BCP host-files must contain at least one column
    Anyone ever tried this before?

    Hi Guys,
    Thanks for the suggestions. I had been trying to avoid temp tables (I don't really like them), but I think eventually they were the only way to go. Unfortunately, this opened a whole can of scoping worms, and after a couple of hours it had all given me a right headache. However, the good news is I've finally got it working as I wanted.
    I was having issues using temp tables: as the tables being used were dynamic, I would have to create them in a dynamic SQL string, and they weren't propagating upwards from child to parent. I seemed to be getting the same problem using global temporary tables too, although I'm not sure why, as they should have worked. They seemed to be out of scope by the time the SP that was calling my sp_QBMultiFileExportGetData tried to output the data. This might possibly have been because BCP wasn't seeing the same scope, but I've not tested it fully (and it's very possible I was making a mistake).
    The solution was to abandon sp_QBMultiFileExportGetData and merge the code back into the calling script. Rather than trying to pass an enormous SQL string to bcp, I run it separately with sp_executesql and dump the results into a global temp table; then bcp just calls a 'select * from temptable', which avoids the select statement getting too long. It's not the most elegant solution, but it seems to work fine.
    ALTER PROCEDURE [dbo].[sp_QBMultiFileExport]
    -- Add the parameters for the stored procedure here
    @tablenames varchar(1000), --list of tables to be exported
    @outputpath varchar(1000), --output path ***AS SEEN BY THE SERVER, NOT THE CLIENT***
    @servername varchar(100), --Server where data resides
    @dbname varchar(100), --database name
    @delimiter varchar(1) --output delimiter
    AS
    BEGIN
    declare @Execstring as nvarchar(max)
    declare @currenttable as varchar(100)
    declare @colnames table
    (
    columnname varchar(100)
    )
    declare @currentfieldname as varchar(100)
    declare @selectlist as varchar(max)
    declare @fieldnamelist as varchar(max)
    --Get rid of CRLFs in the tablenames parameter
    set @tablenames=replace(@tablenames,char(10),'')
    set @tablenames=replace(@tablenames,char(13),'')
    --add extra comma to the end of the list (needed later for consistency)
    set @tablenames=@tablenames+','
    --Get first table in the list
    set @currenttable=substring(@tablenames,1,charindex(',',@tablenames)-1)
    while @tablenames<>''
    begin
    --Get a list of fieldnames from syscols
    insert into @colnames(columnname)
    select COLUMN_NAME
    from INFORMATION_SCHEMA.COLUMNS where TABLE_NAME = @currenttable
    set @selectlist=''
    set @fieldnamelist=''
    while exists(select * from @colnames)
    begin
    --get first column name
    select top 1 @currentfieldname=columnname from @colnames
    --add to select statement lists
    set @selectlist=@selectlist + 'quotename(['+ @currentfieldname + '],char(34)),'
    set @fieldnamelist=@fieldnamelist + '''' + @currentfieldname + ''' [' +@currentfieldname + '],'
    --remove column from temptable
    delete from @colnames where columnname=@currentfieldname
    end
    --remove last quote from field lists
    set @selectlist=substring(@selectlist,1,len(@selectlist)-1)
    set @fieldnamelist=substring(@fieldnamelist,1,len(@fieldnamelist)-1)
    --check for temp table, and drop if necessary
    IF object_id('tempdb..##MultiFileExportTempTable') IS NOT NULL
    BEGIN
    DROP TABLE ##MultiFileExportTempTable
    END
    --Build list of fieldnames, and select list, unioned together
    --and put the results in temptable
    set @execstring='select ' + @fieldnamelist  + ' into ##MultiFileExportTempTable union all select ' + @selectlist + ' from ' + @dbname + '..'  + @currenttable
    exec sp_executesql @execstring
    --get BCP to pull data back from ##temptable, and dump in file
    set @execstring='EXEC master..xp_cmdshell ''bcp "select * from ##MultiFileExportTempTable" queryout ' + @outputpath + '\' + @currenttable + '.txt' + ' -c -T -t"' + @delimiter + '"'''
    exec sp_executesql @execstring
    --drop tablename from list
    set @tablenames=replace(@tablenames,@currenttable + ',','')
    --if tablenames list is not empty, get the next one
    if @tablenames<>''
    set @currenttable=substring(@tablenames,1,charindex(',',@tablenames)-1)
    else
    set @currenttable=''
    end
    IF object_id('tempdb..##MultiFileExportTempTable') IS NOT NULL
    BEGIN
    DROP TABLE ##MultiFileExportTempTable
    END
    END
    So, you call this with...
    exec dbo.[sp_QBMultiFileExport] 'table1,table2,table3',filepath,servername,dbname,delimiter
    ...and it creates delimited files called table1.txt, table2.txt and table3.txt in the specified folder, with field headings and text qualifiers.
    Many thanks for all your suggestions

  • Bind Variable in SELECT statement and get the value in PL/SQL block

    Hi All,
    I would like to pass a bind variable in a SELECT statement and get the value of the column in dynamic SQL.
    Please see below.
    I want to get the below value
    Expected result:
    select  distinct empno ,pr.dept   from emp pr, dept ps where   ps.dept like '%IT'  and pr.empno =100
    100, HR
    select  distinct ename ,pr.dept   from emp pr, dept ps where   ps.dept like '%IT'  and pr.empno =100
    TEST, HR
    select  distinct loc ,pr.dept   from emp pr, dept ps where   ps.dept like '%IT'  and pr.empno =100
    NYC, HR
    Using the block below I am getting the column names only, not the value of the column. I need to get that value (TEST, NYC, ...) into the l_col_val variable.
    Please suggest
    ----- TABLE LIST
    CREATE TABLE EMP(
    EMPNO NUMBER,
    ENAME VARCHAR2(255),
    DEPT VARCHAR2(255),
    LOC VARCHAR2(255)
    );
    INSERT INTO EMP (EMPNO,ENAME,DEPT,LOC) VALUES (100,'TEST','HR','NYC');
    INSERT INTO EMP (EMPNO,ENAME,DEPT,LOC) VALUES (200,'TEST1','IT','NYC');
    INSERT INTO EMP (EMPNO,ENAME,DEPT,LOC) VALUES (300,'TEST2','MR','NYC');
    INSERT INTO EMP (EMPNO,ENAME,DEPT,LOC) VALUES (400,'TEST3','HR','DTR');
    INSERT INTO EMP (EMPNO,ENAME,DEPT,LOC) VALUES (500,'TEST4','HR','DAL');
    INSERT INTO EMP (EMPNO,ENAME,DEPT,LOC) VALUES (600,'TEST5','IT','ATL');
    INSERT INTO EMP (EMPNO,ENAME,DEPT,LOC) VALUES (700,'TEST6','IT','BOS');
    INSERT INTO EMP (EMPNO,ENAME,DEPT,LOC) VALUES (800,'TEST7','HR','NYC');
    COMMIT;
    CREATE TABLE COLUMNAMES(
    COLUMNAME VARCHAR2(255)
    );
    INSERT INTO COLUMNAMES(COLUMNAME) VALUES ('EMPNO');
    INSERT INTO COLUMNAMES(COLUMNAME) VALUES ('ENAME');
    INSERT INTO COLUMNAMES(COLUMNAME) VALUES ('DEPT');
    INSERT INTO COLUMNAMES(COLUMNAME) VALUES ('LOC');
    COMMIT;
    CREATE TABLE DEPT(
    DEPT VARCHAR2(255),
    DNAME VARCHAR2(255)
    );
    INSERT INTO DEPT(DEPT,DNAME) VALUES ('IT','INFORMATION TECH');
    INSERT INTO DEPT(DEPT,DNAME) VALUES ('HR','HUMAN RESOURCE');
    INSERT INTO DEPT(DEPT,DNAME) VALUES ('MR','MARKETING');
    INSERT INTO DEPT(DEPT,DNAME) VALUES ('IT','INFORMATION TECH');
    COMMIT;
    PL/SQL BLOCK
    DECLARE
      TYPE EMPCurTyp  IS REF CURSOR;
      v_EMP_cursor    EMPCurTyp;
      l_col_val           EMP.ENAME%type;
      l_ENAME_val       EMP.ENAME%type;
    l_col_ddl varchar2(4000);
    l_col_name varchar2(60);
    l_tab_name varchar2(60);
    l_empno number ;
    b_l_col_name VARCHAR2(255);
    b_l_empno NUMBER;
    begin
    for rec00 in (
    select EMPNO aa from EMP)
    loop
    l_empno := rec00.aa;
    for rec in (select COLUMNAME as column_name from columnames)
    loop
    l_col_name := rec.column_name;
    begin
      l_col_val :=null;
       l_col_ddl := 'select  distinct :b_l_col_name ,pr.dept ' ||'  from emp pr, dept ps where   ps.dept like ''%IT'' '||' and pr.empno =:b_l_empno';
       dbms_output.put_line('DDL ...'||l_col_ddl);
       OPEN v_EMP_cursor FOR l_col_ddl USING l_col_name, l_empno;
    LOOP
        l_col_val :=null;
        FETCH v_EMP_cursor INTO l_col_val,l_ename_val;
        EXIT WHEN v_EMP_cursor%NOTFOUND;
          dbms_output.put_line('l_col_name='||l_col_name ||'  empno ='||l_empno);
       END LOOP;
    CLOSE v_EMP_cursor;
    END;
    END LOOP;
    END LOOP;
    END;
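
    A note on why the block above prints only column names: a bind variable supplies a value, not an identifier, so :b_l_col_name arrives in the query as a character literal and every row simply returns the column name itself. The column name has to be concatenated into the statement text (ideally validated first), and only the empno value should be bound. A hedged rework of the inner statement (DBMS_ASSERT.SIMPLE_SQL_NAME is used here to sanity-check the identifier; the sample values are from the post):
    DECLARE
      TYPE EMPCurTyp IS REF CURSOR;
      v_EMP_cursor EMPCurTyp;
      l_col_val    VARCHAR2(255);
      l_dept_val   VARCHAR2(255);
      l_col_name   VARCHAR2(60) := 'ENAME';
      l_empno      NUMBER := 100;
      l_col_ddl    VARCHAR2(4000);
    BEGIN
      -- concatenate the (validated) column NAME; bind only the VALUE
      l_col_ddl := 'select distinct ' || dbms_assert.simple_sql_name(l_col_name) ||
                   ', pr.dept from emp pr, dept ps' ||
                   ' where ps.dept like ''%IT'' and pr.empno = :b_empno';
      OPEN v_EMP_cursor FOR l_col_ddl USING l_empno;
      LOOP
        FETCH v_EMP_cursor INTO l_col_val, l_dept_val;
        EXIT WHEN v_EMP_cursor%NOTFOUND;
        dbms_output.put_line(l_col_name || ' = ' || l_col_val || ', dept = ' || l_dept_val);
      END LOOP;
      CLOSE v_EMP_cursor;
    END;
    /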

    user1758353 wrote:
    Thanks Billy, would you be able to suggest any other, faster method to load the data into the table? Thanks.
    As Mark responded, it all depends on the actual data to load, its structure and its source/origin. On my busiest database, I am loading on average 30,000 rows every second from data in external files.
    However, those data structures are just that - structured. Logical.
    Having a data structure with hundreds of fields (columns in a SQL table) raises all kinds of questions about how sane that structure is, and what impact it will have on a physical data model implementation.
    There is a gross misunderstanding by many when it comes to performance and scalability. The prime factor that determines performance is not how well you code, what tools/language you use, the h/w your code runs on, or anything like that. The prime factor is the design of the data model, as it determines the complexity/ease of using the data model, and the amount of I/O (the slowest of all db operations) needed to use it effectively.
