Thursday, March 22, 2012
Detecting the renderer
Basically I only want a document map when rendering to PDF. However, if there are any labels, then the HTML will show the document map. So I want to write expressions for the grouping labels that show nothing unless it is rendering to PDF, but how do you detect the renderer from expression code?
Karl
It is not possible to detect the renderer from an expression inside the report.
Just consider the cases where a report is rendered from an execution or a history snapshot. In both cases the report does not get reprocessed; it is just rendered from the snapshot. Snapshots contain the processed report items in an output-format-independent form. Hence, while evaluating expressions, the renderer cannot be determined.
There are some ways to achieve what you want, but they all involve additional work (e.g. writing your own application which requests different reports depending on the output format).
-- Robert
This posting is provided "AS IS" with no warranties, and confers no rights.
"Karl Prosser" <klumsy@.xtra.co.nz> wrote in message
news:7156880c.0503031437.2d9032ab@.posting.google.com...
> Basically I only want a document map when rendering to PDF.
> However, if there are any labels, then the HTML will show the
> document map. So I want to write expressions for the grouping labels
> that show nothing unless it is rendering to PDF, but how do you
> detect the renderer from expression code?
> Karl
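A workaround sometimes used for this (sketched here for illustration; the parameter and report names are hypothetical): add a report parameter, say RenderFormat, defaulting to "HTML", and base each grouping label expression on it, e.g.

    =IIf(Parameters!RenderFormat.Value = "PDF", Fields!GroupName.Value, Nothing)

A custom application (or a URL-access link) can then pass the parameter only when requesting PDF, e.g.

    http://server/ReportServer?/MyReport&rs:Format=PDF&RenderFormat=PDF

As Robert notes, this still fixes the format at execution time, so it does not help for history snapshots.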
Wednesday, March 7, 2012
Destination Spreadsheets in SSIS
Hello -
I am dealing with SSIS in VS 2005, trying to convert all my DTS packages. Basically, all my packages extract some information from a database and load the results into a spreadsheet.
To start, I am trying a TOP 1 query returning a string from the database. The first row has column names, but mysteriously the package starts writing the expected results in the third row instead of the second one. The second row remains blank, and if I do a preview against the destination spreadsheet within the package, I see a NULL value in the second row and then, in the third row, the string I was expecting.
Tried the following with no success:
In Regedit.exe, under HKEY_LOCAL_MACHINE\Software\Microsoft\Jet\4.0\Engines\Excel I set TypeGuessRows=8, ImportMixedTypes=Text, AppendBlankRows=0, FirstRowHasNames=Yes.
Any help would be really appreciated.
I noticed some people are looking at this topic, but there are no answers so far... Should I add more information so that maybe you can help?
Thank you.
|||I just tried it and it works for me. Header in row 1 and data starts and ends in row 2.
Please post more info.
|||I have not seen that behavior either. Perhaps it is something in the source table that is causing it. Have you tried deleting the Excel file and the connection manager and recreating them?
|||I am still having issues... I've created a new package using table pubs.dbo.jobs. I added a "data flow task" to the "control flow" pane. After that, I went to the "data flow" pane and added an "OLE DB Source" with the SQL query "select top 1 job_id from pubs.dbo.jobs", which returns 1.
I also added an "Excel Destination" which points to a spreadsheet with a column header called "ID". All cells in this .xls are formatted as numbers... After that, I connected both components in the "data flow" pane and ran the package (in the Connection Managers I have the OLE DB and the Excel connections).
SSIS succeeded and loaded 1 into the destination spreadsheet, but in the third row instead of the second one. When I go to the "Excel Destination", hit Edit and do a preview, I see ID in the first row (the column name), NULL in the second row (unexpected) and 1 in the third row (the result of the SQL query).
Please advise. Thanks in advance.
|||I followed the same steps you described and everything looks right. So, how are you creating the Excel file? I created a connection manager pointing to a nonexistent Excel file; then, in the Excel Destination component, I used the New button next to 'name of the excel sheet' to create it as suggested by SSIS. Perhaps you are pointing to an existing file that has something in it that causes the behavior you see.|||
Rafael -
This does solve the problem. I was using an existing spreadsheet, so I believe something was going wrong while matching data types (SQL query vs. existing destination sheet).
However, I really need to use this existing spreadsheet. This is because when my package is executed, the destination file is overwritten by a copy of an empty template file with column headers only, so the template file must be usable. This is the way I always did it with DTS packages, and I'm wondering if you have another way to always append to the destination file starting at the second row (otherwise the file will keep growing).
BTW, these data type conversion "issues" in SSIS are a pain... I just don't understand why Microsoft eliminated implicit conversions (we have to convert explicitly in the SQL query or add a Data Conversion transform, mostly when we deal with strings; more to be done in addition to all that we have on our plates already). I miss DTS packages...
Please reply with any comment on how to deal with the problem of using an existing Excel sheet. When I create a new one as suggested by SSIS, it does solve the NULL issue, but I need to use a template file to ensure I always start in the second row.
Thanks for your help.
|||
Gabriel Souza wrote:
BTW, these data type conversion "issues" in SSIS are a pain... I just don't understand why Microsoft eliminated implicit conversions (we have to convert explicitly in the SQL query or add a Data Conversion transform, mostly when we deal with strings; more to be done in addition to all that we have on our plates already). I miss DTS packages...
They may create a little more work upfront, but I'll readily trade that for not having to deal with implicit conversions. I've seen too many tools that use implicit conversions guess wrong about the data types, and you get no errors, no warnings. I've seen input files get their columns reordered, but because of implicit conversions the import process didn't throw any errors, and the wrong data was imported for two weeks before a user questioned the values they were getting.
An explicit conversion is a lot safer for a production-quality application.
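(For illustration, a minimal sketch of the explicit-conversion approach against the pubs sample database; the column choices and aliases are just examples.)

    -- Cast columns explicitly in the source query so the SSIS pipeline
    -- receives unambiguous types (DT_WSTR / DT_R8) that map cleanly to the
    -- Excel destination, instead of relying on guessed or implicit conversions.
    SELECT
        CAST(job_id AS nvarchar(255)) AS ID,       -- Unicode string for an Excel text column
        CAST(min_lvl AS float)        AS MinLevel  -- double-precision float for an Excel numeric column
    FROM pubs.dbo.jobs;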
|||If you use an existing Excel file, the Excel Destination will APPEND to it. So if it detects that the original file has 3 rows, it will write from the 4th row.
Even if rows 2 and 3 look empty to you in Excel, they may have been used previously and may carry formatting, comments, etc. Excel will not see those rows as empty.
One good way to check is to open the file and press Ctrl+Home and then Ctrl+Shift+End. If rows 2 and 3 get selected, that means Excel detected three rows.
To solve this problem, select from A2 with Ctrl+Shift+End, delete the cells (shifting cells up), then save the file. Then check again to see whether rows 2 and 3 still get selected.
I had this problem, and it got solved when I edited the Excel file properly.
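(As a side note, one way to inspect what the Jet engine, the same engine the SSIS Excel connection uses, considers the sheet's used range is to query the file from T-SQL; the file path and sheet name below are placeholders, and ad hoc OPENROWSET queries through the Jet provider must be allowed on the server.)

    -- Rows that Excel/Jet counts as "used" but that look empty in the UI
    -- come back here as all-NULL rows.
    SELECT *
    FROM OPENROWSET('Microsoft.Jet.OLEDB.4.0',
                    'Excel 8.0;Database=C:\Data\Destination.xls;HDR=YES',
                    'SELECT * FROM [Sheet1$]');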
|||Karfast -
Thanks for the reply, but this does not help... The template file is in the correct format, and we can't have manual work here...
The packages are supposed to run daily, and they can't keep appending because the files would grow indefinitely. This is why we use the template file and copy it over the destination file. The destination is then empty, and the package starts in the second row again (at least this is the way we always did it with DTS packages).
The problem I have is... when this template file is copied, the NULL issue happens again, and it looks like SSIS gets lost with the data types because I did not create the Excel sheet by clicking the "New" button in the "Excel Destination Editor" as Rafael suggested (since I am dealing with the fresh template file by the time the package runs).
Rafael's suggestion does solve the problem with the unexpected NULL (because the destination sheet is created the way SSIS suggests when we click the "New" button), but when the template file is copied the issue comes back... This process of appending to the file starting at the second row should be automatic; we can't rely on manual deletions. As I said, the template file is formatted correctly.
Let me know your thoughts.
Thanks a lot.
|||
Gabriel, on a recent project we ran into a nearly identical issue. At the end of updating one of our dimensions we needed to output the results to reflect any changes, so our Excel file and database table were completely in sync. We solved the issue using a template Excel file, but the template followed these standards:
All cells formatted as text
No formulas
Header row only
All rows below header deleted initially to avoid any previous entries
Once the template was created it was untouched. Within our SSIS package we used a File System Task to copy the template over the existing file; then, in our data flow, we took our source, ran it through a Data Conversion transform (Unicode string 255 or double-precision float), and output it to our new Excel file. We used an Excel Destination mapped to the overwritten file, and for the File System Task we had a source (the template) and a destination (the output file).
I can send you the package and template if you are still having difficulty but the lessons we learned were the formatting of the Excel template and matching the data types from the database to the Excel destination.
Hope this helps! Good luck!
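(For what it's worth, the copy-the-template step does not have to be a File System Task; a rough T-SQL equivalent is sketched below, assuming xp_cmdshell is available and with placeholder paths.)

    -- Overwrite the destination workbook with the clean, header-only template
    -- before each load, so the Excel Destination always starts appending at row 2.
    EXEC master.dbo.xp_cmdshell
        'copy /Y "C:\Templates\Output_Template.xls" "C:\Data\Output.xls"';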
|||Hi Gabriel,
I meant that the template file should be correct, and that that is the only manual operation.
Now, if the template has column headers in row 1 and data in its second and third rows, all destination files will have that too, and hence your first real row will be the 4th row.
Please check your template file.
I had exactly the same scenario and the same problem, and it got solved.
HTH
Kar
|||ADMariner -
Your suggestions are pretty good, and I see you truly understood the issue I was having. I tried to apply these steps, but unfortunately I was still having the issue... Even after making sure all 60,000 rows after the first row were deleted properly, the NULL value would still show up.
The only thing that solved the problem was to create the destination sheet by clicking the "New" button in the "Excel Destination Editor" as Rafael suggested. As soon as I applied the same trick to the template file and pointed the SSIS package back to the destination spreadsheet, the issue went away.
I really appreciate all the help from everybody. Thanks a lot
|||
ADMariner wrote:
We solved the issue using a template Excel file, but the template followed these standards:
All cells formatted as text
No formulas
Header row only
All rows below header deleted initially to avoid any previous entries
Once the template was created it was untouched. Within our SSIS package we used a File System Task to copy the template over the existing file; then, in our data flow, we took our source, ran it through a Data Conversion transform (Unicode string 255 or double-precision float), and output it to our new Excel file. We used an Excel Destination mapped to the overwritten file, and for the File System Task we had a source (the template) and a destination (the output file).
After you wrote to the template and then opened it in Excel, were your floats represented as numbers or as text? I haven't been able to get it to write floats to an existing file (header row only) that remain formatted as numbers in Excel. Instead, they're text, with the related side effects (left-justified, a green triangle in the upper left warning about a possible number stored in a text field, conditional formatting doesn't work correctly, etc.). I tried having the Excel Destination create the spreadsheet, and then it DID write the floats as numbers, and it would continue appending numbers on subsequent runs; but when I removed the data from that file (leaving the header) and tried to write to it again, the floats were written as text.
Saturday, February 25, 2012
Designing a database within a database... design question storing data...
I have a system that basically stores a database within a database (I'm sure lots of you have done this before in some form or another).
At the end of the day, I'm storing the actual data generically in a column of type nvarchar(4000), but I want to add support for unlimited text. I want to do this in a smart fashion. Right now I am leaning towards having two nullable Value fields:
towards putting 2 nullable Value fields:
ValueLong ntext nullable
ValueShort nvarchar(4000) nullable
and dynamically storing the info in one or the other depending on size. ASP.NET does this very thing in its Session State model; look at the ASPStateTempSessions table. That table has both a SessionItemShort of type varbinary(7000) and a SessionItemLong of type image.
My question is: is it better to use varbinary(7000) and image? I'm thinking maybe I should go down this path simply because ASP.NET does, but I don't really know why. Does anyone know what the benefit of using the varbinary and image data types would be? If it's just to allow saving binary data, then I don't really need that right now (and I don't think ASP.NET does either). Are there any other reasons?
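(For illustration only, a minimal T-SQL sketch of the two-column idea with hypothetical names; the writer picks a column by length.)

    -- Generic value storage: short values go in the nvarchar column,
    -- longer ones spill into the ntext column.
    CREATE TABLE dbo.ItemValue (
        ItemValueID int IDENTITY(1,1) PRIMARY KEY,
        ValueShort  nvarchar(4000) NULL,  -- used when the value fits
        ValueLong   ntext          NULL   -- used otherwise
    );

    -- Writer logic (long values would normally arrive from the client;
    -- the branch is shown only to sketch the idea):
    DECLARE @v nvarchar(4000)
    SET @v = N'some short value'
    IF LEN(@v) <= 4000
        INSERT dbo.ItemValue (ValueShort) VALUES (@v)
    ELSE
        INSERT dbo.ItemValue (ValueLong) VALUES (@v)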
thanks,
dave
> I have a system that basically stores a database within a database (I'm
> sure lots of you have done this before in some form or another).
Please explain: what form is the data you are storing? If it isn't represented relationally, then why use SQL Server?
--
David Portas
SQL Server MVP
--
|||Dave (chakachimp@.yahoo.com) writes:
> My question is, is it better to use varbinary(7000) and image? I'm
> thinking maybe I should go down this path, simply because ASP.NET does,
> but I don't really know why. Does anyone know what would be the benefit
> of using varbinary and Image datatypes? If it's just to allow saving of
> binary data, then I don't really need that right now (and I don't think
> ASP.NET does either). Are there any other reasons?
Depends on the data you are storing. Since you talk about a "database within a database", my initial reaction was that you would use image, since I assumed the database is a binary file, complete with indexes, integer numbers, and whatever.
But if the "database" is represented as text, for instance an XML document, then there is no reason to use binary data types.
--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp
|||I'm sorry, I need to elaborate. When I say a database within a database, I don't mean storing the actual database in a binary column or storing XML in a column; instead, I mean defining the structure of the data within a set of tables.
Instead of a concrete table such as Member with 3 columns (MemberID int, FirstName varchar(25), LastName varchar(25)), it is defined as an abstract table spread across a series of tables. One row of any of my abstract tables actually lives (potentially) in several rows of a sort of "Value" table. This "Value" table contains one column (varchar(4000)) that actually stores the value of the data item.
In our system we have over 15 abstract objects (Member being one of them), so I know people will begin to question the architecture, but that is not my point here... We do this for many reasons (a sketch of the resulting tables follows below):
1) We must store history on all changes (we write medical software).
2) We must encrypt the data, and this allows a generic way to do it (just flip a bit).
3) Our application will soon allow its users to create user-defined tables, and this is set up perfectly for that since it would only require DML (not DDL).
4) Speed isn't that important; right now our product has 10 users max. Even if it became an issue, we could solve it easily...
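(A minimal sketch of what such abstract tables might look like; all names are hypothetical.)

    -- Metadata: which abstract tables exist and what fields they have.
    CREATE TABLE dbo.AbstractTable (
        TableID   int IDENTITY(1,1) PRIMARY KEY,
        TableName varchar(128) NOT NULL          -- e.g. 'Member'
    );

    CREATE TABLE dbo.AbstractField (
        FieldID   int IDENTITY(1,1) PRIMARY KEY,
        TableID   int NOT NULL REFERENCES dbo.AbstractTable(TableID),
        FieldName varchar(128) NOT NULL          -- e.g. 'FirstName'
    );

    -- Data: one abstract row is scattered across several Value rows.
    CREATE TABLE dbo.AbstractValue (
        RowID   int NOT NULL,                    -- identifies the logical row
        FieldID int NOT NULL REFERENCES dbo.AbstractField(FieldID),
        Value   varchar(4000) NULL,
        PRIMARY KEY (RowID, FieldID)
    );

    -- Creating a user-defined "table" is now plain DML, not DDL:
    INSERT dbo.AbstractTable (TableName) VALUES ('Member');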
thanks,
dave
|||Dave (chakachimp@.yahoo.com) writes:
> I'm sorry, I need to elaborate. When I say database within a database,
> I don't mean storing the actual database in a binary column or storing
> XML in a column, instead I mean defining the structure of data within a
> set of tables.
> Instead of a concrete table such as Member with 3 columns: MemberID
> int, FirstName varchar(25), LastName varchar(25), it is defined as an
> abstract table that's defined across a series of tables. One row of any
> of my abstract table actually lives (potentially) in several rows of a
> sort of "Value" table. This "Value" table contains one column
> (Varchar(4000)) that actually stores the value of the data item.
> In our system we have over 15 abstract objects (Member being one of
> them), so I know people will begin to question the architecture, but
> that is not my point here... We do this for many reasons
> 1) We must store history on all changes (we write medical software)
> 2) We must encrypt the data and this allows a generic way to do this
> (just flip a bit)
> 3) Our application will soon allow its users to create user-defined
> table and this is set up perfectly for that since it would only require
> DML to achieve this (not DDL)
> 4) Speed isn't that important, right now our product has 10 users max.
> Even if it became an issue we could solve this easily...
Thanks for the elaboration, but I am not sure that this really provided any more useful information to answer the question. The "database within a database" is thus some sort of object that cannot be described in a single table; nothing strange with that, Order + OrderDetails is a classic example.
But if I remove the veil about databases within databases, and just take your core question (what data type should you use to save text data), the answer is (n)varchar or (n)text, depending on your need to support Unicode and the size limits of the data.
--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp
|||Let me completely rephrase my approach...
If you've ever installed the SQL data model for ASP.NET that resides in SQL Server, you'll notice that Microsoft has a table called ASPStateTempSessions. There are two columns that hold the encrypted session data of the user. These two columns are:
varbinary(7000)
image
and they are each nullable. Depending on the size of the session data, one or the other column is used, since blob columns (such as image, text, etc.) are inefficient. Using the Session in ASP.NET you'll notice that it consists of strings only, so why did Microsoft decide to use these types? Is there some efficiency benefit? Or were they planning on simply supporting possible binary data in the future?
-dave
|||Dave (chakachimp@.yahoo.com) writes:
> If you've ever installed the sql data model for ASP.NET that resides in
> sql server, you'll notice that Microsoft has a table called
> ASPStateTempSessions. There are two columns that hold the encrypted
> session data of the user. These two columns are:
> varbinary(7000)
> Image
> and they are each nullable. Depending on the size of the Session data,
> one or the other column is used since Blob columns (such as Image,
> Text, etc...) are inefficient. Using the Session in ASP.NET you'll
> notice that it consists of strings only, so why did Microsoft decide to
> use these types? Is there some efficiency benefit? Or were they planning
> on simply supporting possible binary data in the future.
Sorry, I have zero knowledge about ASP.NET, so I cannot answer any question about its design.
--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp
|||I've been working on the AspState database; some information indicates that the transfer of the string data is done as a binary stream for efficiency, thus requiring a binary column data type to store it.
*** Sent via Developersdex http://www.developersdex.com ***
Friday, February 17, 2012
Design idea Help for Database
We are creating a database of sales agents. Basically I have a table of about 35,000 people, a second one with 8,000 offices, and a third table of around 400,000 transactions done by those 35,000 people in those 8,000 offices. We get new data every day that just updates the existing tables with the updated rosters and transactions.
I want to build a quick website where our recruiters can look up those people and keep contact info and all that fun stuff, but also pull numbers on those people, like:
* How many transactions that sales agent did last year
* Rosters by office showing production
The goal is to click the user's name and see all the percentages, commissions and data like that, which we will get by searching that table of transactions by the agent's ID.
The problems I see right away are things like:
* If I were to pull a report showing all agents in a single office with their number of transactions next to their names, that is a HUGE query. It would have to search the 400k records for each of the agents on just that one report.
A suggestion I was given:
I was told by a fellow programmer that a better way to do this is to have an additional table that houses stats info, and have SQL Server run automated jobs every day at, say, midnight to update that table (see the sketch below). This table could show stuff like:
** number of transactions for each user
** avg sales price on all transactions for each user
** avg commission on transactions for each user
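(For illustration, a rough sketch of such a summary table and its nightly refresh; all table and column names are hypothetical.)

    -- Summary table rebuilt by a nightly scheduled job (e.g. SQL Server Agent).
    CREATE TABLE dbo.AgentStats (
        AgentID          int   NOT NULL PRIMARY KEY,
        TransactionCount int   NOT NULL,
        AvgSalePrice     money NULL,
        AvgCommission    money NULL
    );

    -- Nightly refresh: wipe and re-aggregate the 400k-row transaction table.
    TRUNCATE TABLE dbo.AgentStats;

    INSERT dbo.AgentStats (AgentID, TransactionCount, AvgSalePrice, AvgCommission)
    SELECT AgentID, COUNT(*), AVG(SalePrice), AVG(Commission)
    FROM dbo.Transactions
    GROUP BY AgentID;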
let me know your thoughts
thanks in advance
Monkey Girl
bob1barker@.yahoo.com wrote:
> we are creating a database of sales agents. Basically I have a table
> of about 35,000 people, a second one with 8000 offices, and a third
> table of around 400,000 transactions done by those 35,000 people in
> those 8000 offices. We get new data every day that just updates the
> existing tables with the updated rosters and transactions.
> I want to build a quick website where our recruiters can look up those
> people and keep contact info and all that fun stuff but also pull
> numbers on those people. Like
> * How Many Transactions that Sales Agent did last year
> * Rosters by office showing production
> The goal is to click the user's name and see all the percentages,
> commissions and data like that which we will get by searching that
> table of transactions by the agent's ID.
> the problems I see right away are stuff like
> * If I were to pull a report showing all agents in a single office with
> their number of transactions next to their name, that is a HUGE query.
> It would have to search the 400k worth of records for each of the
> agents on just that one report.
> A suggestion I was given
> I was told by a fellow programmer a better way to do this is to have an
> additional table that houses stats info and have the SQL server run
> automated reports everyday at say midnight where it updates that table.
> This table could show stuff like
> ** number of transactions for each user
> ** avg sales price on all transactions for each user
> ** avg commission on transaction for each user
> let me know your thoughts
> thanks in advance
> Monkey Girl
You didn't specify how quickly your data is growing, but less than 1 million rows is a small database by most standards and probably isn't going to be much trouble under any decent SQL Server implementation.
Pre-aggregating the data may be worthwhile, but SQL Server has tools to do that for you: Reporting Services or Analysis Services. It would be a waste to create your own summary tables and then write the code just to update them once a day.
> It would have to search the 400k worth of records for each of the
> agents on just that one report.
Not if you know what an index is... :-)
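(For example, with hypothetical names: a single index turns the per-office report into index seeks instead of scans of all 400k rows.)

    -- Index the transaction table on the agent key.
    CREATE INDEX IX_Transactions_AgentID ON dbo.Transactions (AgentID);

    -- All agents in one office with their transaction counts:
    SELECT a.AgentID, a.AgentName, COUNT(t.AgentID) AS TransactionCount
    FROM dbo.Agents a
    LEFT JOIN dbo.Transactions t ON t.AgentID = a.AgentID
    WHERE a.OfficeID = 42
    GROUP BY a.AgentID, a.AgentName;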
Hope this helps.
--
David Portas, SQL Server MVP
Whenever possible please post enough code to reproduce your problem.
Including CREATE TABLE and INSERT statements usually helps.
State what version of SQL Server you are using and specify the content
of any error messages.
SQL Server Books Online:
http://msdn2.microsoft.com/library/...US,SQL.90).aspx
--